November 30, 2013

New Papers, Old Papers, and Re-convolved Concepts, November edition

I have been busy the past several months fleshing out new ideas and finishing up older ones. The first paper profiled here is "Cellular decision-making bias: the missing ingredient in cell functional diversity", something I published on arXiv [1] last month. This paper is a computationally-oriented derivative of the paper "Defining phenotypic respecification diversity using multiple cell lines and reprogramming regimens", published earlier this year in Stem Cells and Development [2].



In [2], it was demonstrated that a series of different cell lines of the same type (e.g. fibroblast) exhibit great variability (many-fold differences) in terms of their direct cellular reprogramming efficiency. The efficiency of this process was measured using phenotypic (e.g. immunocytochemical) assays. This may or may not be due to the underlying genomic processes. Using a limited set of assays analyzed by means of differential gene expression, no smoking gun was found. While we did not investigate candidate epigenetic markers, the phenotypic trend was nevertheless consistent for both human and mouse cells reprogrammed to both generic muscle fiber and generic dopaminergic neurons [3].



The data collected and analyzed in [2] also set up a series of computational investigations using a method derived from Signal Detection Theory (SDT) and other signal-to-noise characterization methods [4]. SDT is generally used to understand cognitive decision-making in humans and animals. However, decision-making theory has also been used to explain outcomes at the cellular and molecular level, particularly switch-like processes [5]. Using standard SDT as inspiration, I propose in [1] that cellular and molecular processes can be characterized and analyzed using a technique called cellular SDT.


Major collaborator on the Stem Cells and Development paper [2]: Dr. Steven Suhr, Michigan State University. 

Cellular SDT can uncover something called decision-making bias, which is hypothesized to occur during the conversion of cells from one phenotype to another [3]. In this case, the term bias refers to the magnitude of difference in conversion efficiency for the same cell line given two distinct stimuli. The overarching assumption is that differences observed across different small-scale stimuli (e.g. forced transcription factor activity) can be characterized systematically within and between specific cell types and lines.
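For readers who want a concrete sense of the calculation, here is a minimal sketch (in Python) of how standard SDT sensitivity and bias measures can be computed. This is not the code from the paper; treating the two conversion efficiencies as stand-in "hit" and "false alarm" rates is an illustrative assumption on my part.

```python
# Minimal sketch of an SDT-style bias calculation (not the paper's actual code).
# Reprogramming efficiency under stimulus A is treated as a "hit" rate and
# efficiency under stimulus B as a "false alarm" rate -- an illustrative assumption.
from scipy.stats import norm

def sdt_measures(rate_a, rate_b, eps=1e-6):
    """Return (d_prime, criterion_c) for two conversion efficiencies in (0, 1)."""
    # Clamp away from 0 and 1 so the inverse-normal (z) transform stays finite.
    za = norm.ppf(min(max(rate_a, eps), 1 - eps))
    zb = norm.ppf(min(max(rate_b, eps), 1 - eps))
    d_prime = za - zb             # separability of the two response distributions
    criterion = -0.5 * (za + zb)  # bias toward one outcome over the other
    return d_prime, criterion

# Example: a hypothetical fibroblast line converts at 2.4% under stimulus A, 0.3% under B.
print(sdt_measures(0.024, 0.003))
```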

My talk to the BEACON Center in May 2013. The first part (YouTube video) focused on modeling diversity in cellular reprogramming (an early version of cellular decision-making bias).

Here is the abstract of the paper. Associated code (on Github) can be found here:
"Cell functional diversity is a significant determinant on how biological processes unfold. Most accounts of diversity involve a search for sequence or expression differences. Perhaps there are more subtle mechanisms at work. Using the metaphor of information processing and decision-making might provide a clearer view of these subtleties. Understanding adaptive and transformative processes (such as cellular reprogramming) as a series of simple decisions allows us to use a technique called cellular signal detection theory (cellular SDT) to detect potential bias in mechanisms that favor one outcome over another. We can apply method of detecting cellular reprogramming bias to cellular reprogramming and other complex molecular processes. To demonstrate the scope of this method, we will critically examine differences between cell phenotypes reprogrammed to muscle fiber and neuron phenotypes. In cases where the signature of phenotypic bias is cryptic, signatures of genomic bias (pre-existing and induced) may provide an alternative. The examination of these alternates will be explored using data from a series of fibroblast cell lines before cellular reprogramming (pre-existing) and differences between fractions of cellular RNA for individual genes after drug treatment (induced). In conclusion, the usefulness and limitations of this method and associated analogies will be discussed."


The second paper profiled here is called "A Semi-automated Peer-review System", a short paper I published on the arXiv earlier this month [6]. The idea of an automated peer-review system came to me after preparing a blog post [7] and reading a paper on the degree of novelty typically found among highly influential scientific papers [8]. The paper outlines a human-assisted adaptive algorithm that detects fraud in a set of scientific papers without also filtering out legitimate but highly novel work. As in [1], the approach is based on signal detection theory (SDT). In this case, however, a more conventional application (e.g. standard ROC curves) is used to minimize the number of truly low-quality and fraudulent manuscripts while maintaining diversity and novelty in the scientific literature.


Here is the abstract and here is the associated code (mostly pseudo-code) on Github:
"A semi-supervised model of peer review is introduced that is intended to overcome the bias and incompleteness of traditional peer review. Traditional approaches are reliant on human biases, while consensus decision-making is constrained by sparse information. Here, the architecture for one potential improvement (a semi-supervised, human-assisted classifier) to the traditional approach will be introduced and evaluated. To evaluate the potential advantages of such a system, hypothetical receiver operating characteristic (ROC) curves for both approaches will be assessed. This will provide more specific indications of how automation would be beneficial in the manuscript evaluation process. In conclusion, the implications for such a system on measurements of scientific impact and improving the quality of open submission repositories will be discussed". 

Finally, I am giving a presentation at the Network Frontiers Workshop at Northwestern University's NICO Institute on the 4th of December. The title of the talk is "From Switches to Convolution to Tangled Webs: evolving sub-optimal, subtle biological mechanisms". The work is an extension of my arXiv paper from 2011 [9] on Biological Rube Goldberg Machines (RGMs), something I also refer to as a convolution architecture. Here is the abstract and here is the associated code on Github:
"One way to understand complexity in biological networks is to isolate simple motifs like switches and bi-fans. However, this does not fully capture the outcomes of evolutionary processes. In this talk, I will introduce a class of process model called convolution architectures. These models demonstrate bricolage and ad-hoc formation of new mechanisms atop existing complexity. Unlike simple motifs (e.g. straightforward mechanisms), these models are intended to demonstrate how evolution can produce complex processes that operate in a sub-optimal fashion. The concept of convolution architectures can be extended to complex network topologies. Simple convolution architectures with evolutionary constraints and subject to natural selection can produce step lengths that deviate from optimal expectation. When convolution architectures are represented as components of bidirectional complex network topologies, these circuitous paths should become “spaghetti-fied”, as they are not explicitly constrained by inputs and outputs. This may also allow for itinerant and cyclic self-regulation resembling chaotic dynamics. The use of complex network topologies also allows us to better understand how higher-level constraints (e.g. hub formation, modularity, preferential attachment) affect the evolution of sub-optimality and subtlety. Such embedded convolution architectures are also useful for modeling physiological, economic, and social complexity". 

And last but not least, a new preprint server has come online called BioRxiv. BioRxiv (administered by Cold Spring Harbor Laboratory) accepts manuscripts from a number of biological disciplines, from Bioinformatics to Molecular Biology to Zoology. I kicked things off in the Zoology category with an older manuscript (originally presented at a conference in 2006) entitled "Filling up the Tree: considering the self-organization of avian roosting behavior" [10]. However, for more theoretical and interdisciplinary work such as the paper in [11], I still plan on using arXiv.



NOTES:

[1] Alicea, B.   Cellular decision-making bias: the missing ingredient in cell functional diversity. arXiv repository, arXiv:1310.8268 [q-bio.QM] (2013).

[2] Alicea, B., Murthy, S., Keaton, S.A., Cobbett, P., Cibelli, J.B., and Suhr, S.T.   Defining phenotypic
respecification diversity using multiple cell lines and reprogramming regimens. Stem Cells and Development, 22(19), 2641-2654 (2013).

[3] In this example, conversion refers to direct cellular reprogramming techniques (e.g. those used to create iPS cells) that result in the creation of induced neural cells (iNCs) and induced skeletal muscle cells (iSMCs). However, conversion could also refer to carcinogenesis or developmental processes.

Figure 1 from Alicea et al. (2013). Frames A-D, immunocytochemical characterization of iNCs and iSMCs. Frames E-H, diversity in reprogramming efficiency for a range of cell lines.

[4] Schultz, S.R.   Signal-to-noise ratio in neuroscience. Scholarpedia, 2(6), 2046 (2007).

[5] Balazsi, G., van Oudenaarden, A., and Collins, J.J.   Cellular Decision-Making and Biological Noise: From Microbes to Mammals. Cell, 144(6), 910–925 (2011). 

[6] Alicea, B.   A Semi-automated Peer-review System. arXiv: 1311.2504 [cs.DL, cs.HC, cs.SI, physics.soc-ph] (2013).

[7] Alicea, B.   The Novelty-Consensus Dampening.   Synthetic Daisies blog, October 22 (2013). 

[8] Uzzi, B., Mukherjee, S., Stringer, M., and Jones, B.   Atypical Combinations and Scientific Impact. Science, 342, 468-472 (2013).

[9] Alicea, B.   The 'Machinery' of Biocomplexity: understanding non-optimal architectures in biological systems. arXiv repository, arXiv:1104.3559 [nlin.AO, q-bio.QM, q-bio.PE] (2011).

[10] Alicea, B.   Filling up the Tree: considering the self-organization of avian roosting behavior. bioRxiv, doi:10.1101/000349 (2013).

[11] Alicea, B.   The Emergence of Animal Social Complexity: theoretical and biobehavioral evidence. arXiv repository, arXiv:1309.7990 [q-bio.PR, q-bio.NC] (2013).

November 24, 2013

Evolution, Variation, Development, and Strains of Artificial Life in the Reading Queue

This content is cross-posted to my micro-blog, Tumbld Thoughts. Many new papers on adaptation and evolution, plus a call for conference participation in Artificial Life XIV.

New Readings on Short- and Long-term Evolution from the Reading Queue


Here are a few new papers on experimental evolution. The first is a paper from Jeffrey Barrick and Rich Lenski [1], who use the long-term evolution experiment to look at genome dynamics during bacterial evolution [2]. The first figure shows the types of mutations observed during evolution (occurring on the scale of 10³ generations). The second demonstrates the signatures of optimization, innovation, and epistasis in evolutionary change. Interestingly, an analysis of long-term fitness trajectories in bacterial populations from the same project [3] suggests that adaptation proceeds without reaching so-called fitness peaks (which theory predicts would limit the fitness advantage of a given genotype).
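To illustrate what "no fitness peak" means in practice, here is a small sketch comparing the two functional forms at issue in [3]: a hyperbolic model that saturates at a fitness ceiling versus a power law that keeps climbing. The parameter values are arbitrary placeholders, not fitted estimates.

```python
# Sketch of the two fitness-trajectory models contrasted in Wiser et al. [3].
# Parameter values are arbitrary placeholders for illustration only.
def hyperbolic(t, a=0.7, b=5000.0):
    return 1.0 + a * t / (t + b)      # saturates at 1 + a: a fitness "peak"

def power_law(t, a=0.001, b=0.1):
    return (1.0 + a * t) ** b         # unbounded, ever-slowing improvement

for t in [2_000, 10_000, 50_000, 200_000]:
    print(f"generation {t:>7}: hyperbolic={hyperbolic(t):.3f}  power-law={power_law(t):.3f}")
# The two curves track each other early on, but only the power law keeps rising.
```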



The second paper is from Ted Garland and colleagues, and involves using artificial selection [4] in mice to find the limits of evolution (or evolvability) over 10-100 generations [5, 6]. A wheel-running task is used to assess physical performance. The first figure shows baseline performance, maximum evolved performance, and post-peak performance given genetic (G), environmental (E), and GxE sources of variation. The second figure shows differences in wheel-running performance between male and female mice over 30 generations. In this case, behavioral analysis reveals distinct limits to the advantages gained from artificial selection (which are not always due to adaptation).
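Here is a minimal sketch of why selection responses plateau, using the textbook breeder's equation R = h²S with additive variance eroding over generations. The numbers are illustrative assumptions, not values from [5].

```python
# Minimal sketch of a selection limit using the breeder's equation R = h^2 * S,
# with heritability eroding each generation. Parameters are arbitrary illustrations,
# not estimates from the Careau et al. study [5].
def selection_response(generations=30, h2=0.3, S=1.0, erosion=0.9):
    mean_trait, h2_t = 0.0, h2
    trajectory = []
    for _ in range(generations):
        mean_trait += h2_t * S        # per-generation response R = h^2 * S
        h2_t *= erosion               # heritability declines as variants fix
        trajectory.append(mean_trait)
    return trajectory

traj = selection_response()
print("cumulative response at gen 10:", round(traj[9], 2), " gen 30:", round(traj[29], 2))
# The trajectory plateaus: later generations add little, echoing limits to selection.
```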


New Readings on Human Variation from the Reading Queue


Here are four new papers on human genomic variation. The first [7] is a review of genome mosaicism, or variation across cells in the same human body. Mosaicism results from errors in either chromosome segregation during mitosis or DNA replication. In neurons from the frontal cortex [8], mosaicism is responsible for variation in chromosomal complements and copy number variants (CNVs). This variation comes in the form of aneuploidies, retrotransposons, and large-scale CNV differences (in 13-41% of neurons sampled). 


In [9], variation in chromatin states across the genome is explored. One finding suggests that variable regions are enriched in SNPs relative to non-variable regions, which may be due to negative selection. The expression of heterozygous SNPs with allele-specific signals is highest for active marks. There is also variation in methylation switches (active/repressed or active/weakly active states), which results in enhancer- and core promoter-specific states.
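For readers unfamiliar with this kind of claim, here is a hedged sketch of a simple SNP-enrichment test (a Fisher's exact test on invented counts); the actual analysis in [9] is considerably more involved.

```python
# Sketch of a simple enrichment test of the kind behind "variable regions are
# enriched in SNPs relative to non-variable regions". The counts are invented
# for illustration; the real analysis in [9] is more involved.
from scipy.stats import fisher_exact

#                     SNPs   non-SNP bases
contingency = [[1200,  50_000],   # variable chromatin regions
               [ 700,  80_000]]   # non-variable chromatin regions

odds_ratio, p_value = fisher_exact(contingency, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2e}")
```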


Finally, functional genomic elements can be more explicitly linked to chromatin signatures. This was done in [10] by finding the cis-regulatory variants that most affect chromatin states. In this study, five post-transcriptional modifiers and three transcription factors were used to show these trends across 14 individuals. It was found that allele-specific patterns of association (between genomic function and chromatin regulation) exist.

Calls for Artificial Life


If you enjoy creating artificial life, and want to write an academic paper about it (8 page, single-spaced limit, IEEE format), then you will want to submit your work to the Artificial Life 14 conference, being held Summer 2014 in NYC. Submission deadline (full papers) is March 31.

Topics include: bio-inspired robotics, cellular automata and artificial chemistries, synthetic life, embodied systems, collective behavioral dynamics, ecological/social/evolutionary dynamics, and the art and philosophy of Artificial Life. There is a separate call for workshops/tutorials (due January 15) and a Science Visualization competition (applications due February 1).

And, last but not least, some new Developmental Biology.....


Last but not least, here is a nice article by Carl Zimmer [11] summarizing the cutting-edge work being done on understanding the potential role of senescent cells in embryonic development. The excellent picture shows a mouse embryo (E15) with the areas of senescent cells stained in blue.

NOTES:

[1] Lenski's long-term evolution experiment was recently profiled in Science. Listen to this podcast for more: Crespi, S.   Podcast Interview: Richard Lenski. Science Express, November 14 (2013).

[2] Barrick, J.E. and Lenski, R.E.   Genome dynamics during experimental evolution. Nature Reviews Genetics, 14 827-839 (2013).

[3] Wiser, M.J., Ribeck, N., and Lenski, R.E.   Long-Term Dynamics of Adaptation in Asexual Populations. Science, doi:10.1126/science.1243357 (2013).

[4] Postma, E., Visser, J., Van Noordwijk, A.J.   Strong artificial selection in the wild results in predicted small evolutionary change. Journal of Evolutionary Biology, 20, 1823–1832 (2007). 

[5] Careau, V., Wolak, M.E., Carter, P.A., and Garland, T.   Limits to Behavioral Evolution: the quantitative genetics of a complex trait under directional selection. Evolution, 67(11), 3102-3119 (2013).

[6] Barton, N. and Partridge, L.   Limits to natural selection. BioEssays, 22, 1075-1084 (2000).
For a short introduction to the concept, please see this primer from Understanding Evolution.

[7] Lupski, J.R. et al.   One Human, Multiple Genomes: Genome Mosaicism. Science, 341, 358-359 (2013).

[8] McConnell, M.J. et al.   Mosaic Copy Number Variation in Human Neurons. Science, 342, 631-637 (2013).

[9] Kasowski, M. et al.   Extensive Variation in Chromatin States Across Humans. Science, 342, 750-752 (2013).


[11] Zimmer, C.   Signs of Aging, Even in the Embryo. NYT Science, November 21 (2013).

November 19, 2013

Fireside Science: The Inefficiency (and Information Content) of Scientific Discovery

This content has been cross-posted to Fireside Science.


In this post, I will discuss a somewhat trendy topic that needs further critical discussion. It combines a crisis in replicating experiments with the recognition that science is not a perfect or errorless pursuit. We start with a rather provocative article in the Economist called "Trouble at the Lab" [1]. The main idea: science needs serious reform in its practice, from the standardization of experimental replication to greater statistical rigor.


While there are indeed perpetual challenges posed by the successful replication of experiments and finding the right statistical analysis for a given experimental design, most of the points in this article should be taken with a grain of salt. In fact, the conclusions seem to suggest that science should be run more like a business (GOAL: the most efficient allocation of resources). This article suffers from many of the same issues as the Science article featured in my last Fireside Science post. Far from being an efficient process, making scientific discoveries and uncovering the secrets of nature requires a very different set of ideals [2]. But don't just rely on my opinions. Here is a sampling of the letters to the editor which followed:


The first is from Stuart Firestein, the author of "Ignorance: how it drives science", which is discussed in [2]. He argues that applying a statistician's theoretical standards to all forms of data is not realistic. While the portion of the original article [1] discussing problems with statistical analysis in most scientific papers is the strongest point made, it also rests on some controversial assumptions. 

The first involves a debate as to whether or not the Null Hypothesis Significance Test (NHST) is the best way to uncover significant relationships between variables. NHST is the use of t-tests and ANOVAs to determine significant differences between experimental conditions (e.g. treatment vs. no treatment). As an alternative, naive and other Bayesian methods have been proposed [3]. However, this still makes a number of assumptions about the scientific enterprise and process of experimentation to which we will return.
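Here is a small sketch of the contrast on simulated data: a conventional two-sample t-test alongside a rough Bayes factor computed via the BIC approximation. This is meant only to illustrate the debate, not to reproduce the specific procedure proposed in [3].

```python
# Sketch contrasting an NHST two-sample t-test with a rough Bayes-factor alternative
# (the BIC approximation), on simulated data. An illustration of the debate, not the
# specific Bayesian procedure proposed in [3].
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
treatment = rng.normal(0.4, 1.0, 40)
control   = rng.normal(0.0, 1.0, 40)

# NHST: p-value from a two-sample t-test.
t_stat, p_value = ttest_ind(treatment, control)

def gaussian_bic(residuals, n_params):
    n = residuals.size
    sigma2 = np.mean(residuals ** 2)                      # MLE variance
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return n_params * np.log(n) - 2 * log_lik

pooled = np.concatenate([treatment, control])
bic_null = gaussian_bic(pooled - pooled.mean(), n_params=2)            # one shared mean
bic_alt  = gaussian_bic(np.concatenate([treatment - treatment.mean(),
                                        control - control.mean()]), n_params=3)
bf_10 = np.exp((bic_null - bic_alt) / 2)  # >1 favors the two-mean (effect) model

print(f"p = {p_value:.4f}, approximate BF10 = {bf_10:.2f}")
```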


The second letter refers to one's philosophy-of-science orientation. This gets a bit at the issue of scientific practice, and how the process of doing science may be misunderstood by a general audience. Interestingly, the notion of "trust, but verify" does not come from science at all, but from diplomacy/politics. Why this is assumed to also be the standard for science is odd.


The third letter will serve as a lead-in to the rest of this post. This letter suggests that the scientific method is simply not up to the task of dealing with highly complex systems and issues. The problem is one of public expectation, which I agree with in part. While experimental methods provide a way to rigorously examine hypothetical relationships between two variables, uncertainty may often swamp out that signal. I think this aspect of the critique is a bit too pessimistic, but let's keep these thoughts in mind...

A reductionist tool in a complex world

Now let's turn to what an experiment uncovers with respect to the complex system you want to understand. While experiments have great potential for control, they are essentially hyper-reductionist in scope. When you consider that most experiments test the potential effect of one variable on another, an experiment may serve no less of a heuristic function than a simple mathematical model [4]. And yet in the popular mind, empiricism (e.g. data) tends to trump conjecture (e.g. theory) [5].

Figure 1. A hypothesis of the relationship between a single experiment and a search space (e.g. nature) that contains some phenomenon of interest.

Ideally, the goal of a single experiment is to reliably uncover some phenomenon in what is usually a very large discovery space. As we can see in Figure 1, a single experiment must be designed to overlap with the phenomenon. This can be very difficult to accomplish when the problem at hand is complex and multi-dimensional (HINT: most problems are). A single experiment is also a relatively information-poor way to conduct this investigation, as shown in Figure 2. Besides being a highly controllable (but perhaps highly reduced) means to test hypotheses, an alternate way to think about an experimental design is as an n-bit register [6].

Figure 2. A single experiment may be an elegant way to uncover the secrets of nature, but how much information does it actually contain?
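To make the register analogy in [6] concrete, here is a toy encoding of an experimental design as an n-bit register: each binary factor contributes one bit, so the design can distinguish at most 2^n states of nature. The factor names are hypothetical.

```python
# Toy reading of the "experiment as n-bit register" analogy from [6]: each binary
# factor in a design is one bit, and the register can distinguish at most 2^n states
# (n bits of information). Entirely illustrative; the factor names are made up.
from itertools import product
from math import log2

factors = ["treatment_on", "high_dose", "female_subjects"]   # a hypothetical 3-bit design
states = list(product([0, 1], repeat=len(factors)))

print(f"{len(factors)}-bit register -> {len(states)} distinguishable conditions "
      f"({log2(len(states)):.0f} bits)")
# A single two-condition experiment, by contrast, is a 1-bit register: it can
# distinguish only two states of nature, however complex the phenomenon.
```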

Now, to get an idea of how such overlap works in the context of replication, we can turn to the concept of an experimental footprint (Figure 3). An experimental footprint qualitatively describes what an experiment (or its replication) uncovers relative to some phenomenon of interest. Let's take animal behavior as an example. There are many sources of variation that contribute to a specific behavior. In any one experiment, we can only observe some of the behavior, and even less of the underlying contributing factors and causes.

A footprint is also useful in terms of describing two things we often do not think about. One is the presence of hidden variables in the data. Another is the effect of uncertainty. Both depend on the variables tested and problems chosen. But just because subatomic particles yield fewer surprises than human psychology does not necessarily mean that the Psychologist is less capable than the Physicist.

Figure 3. Experimental footprint of an original experiment and its replication relative to a natural phenomenon.

The original filial imprinting experiments conducted on geese by Konrad Lorenz serve as a good example. The original experiments were supposedly far messier [7] than the account presented in modern textbooks. What if we suddenly were to find out that replication of the original experimental template did not work in other animal species (or even among ducks anymore)? This suggests that we may need a new way to assess such results (other than chalking them up to mere sloppiness).


So while lack of replication is a problem, the notion of a crisis is overblown. As we have seen in the last example, the notion of replicable results is an idealistic one. Perhaps instead of saying that the goal of experimental science is replication, we should consider a great experiment as one that reveals truths about nature. 

This may be best achieved not through homogeneity, but through a high degree of tolerance (or robustness) to changes in factors such as ecological validity. To assess the robustness of a given experiment and its replications (or variations), we can use information content to tell us whether or not a given set of non-replicable experiments actually yields information. This might be a happy medium between an anecdotal finding and a highly-repeatable experiment.


Figure 4. Is the goal of an experiment unfailingly successful replication, or a robust design that provides diverse information (e.g. successful replications, failures, and unexpected results) across replications?

Consider the case of an experimental paradigm that yields various types of results, such as the priming example from [1]. While priming is highly replicable under certain conditions (e.g. McGurk effect) [8], there is a complexity that requires taking the experimental footprint and systematic variation between experimental replications into account. 

This complexity can also be referred to as the error-tolerance of a given experiment. Generally speaking, the error-tolerance of a given set of experiments is correspondingly higher as information content (related to variability) increases. So even when the replications do not pan out, they are nonetheless still informative. To maximize error-tolerance, the goal should be an experiment with a footprint small enough to be predictive, but large enough to be informative.
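One hedged way to operationalize this "information yield" is the Shannon entropy of the distribution of replication outcomes (successes, failures, unexpected results), as suggested by Figure 4. The mapping from entropy to error-tolerance below is my own illustration.

```python
# One way to operationalize the "information yield" of a set of replications: the
# Shannon entropy of the outcome distribution (successes, failures, unexpected results),
# echoing Figure 4. The mapping to error-tolerance is my own illustration.
from collections import Counter
from math import log2

def outcome_entropy(outcomes):
    counts = Counter(outcomes)
    n = len(outcomes)
    return -sum((c / n) * log2(c / n) for c in counts.values())

always_replicates = ["success"] * 10
mixed_results     = ["success"] * 5 + ["failure"] * 3 + ["unexpected"] * 2

print("entropy (perfect replication):", outcome_entropy(always_replicates))       # 0 bits
print("entropy (mixed outcomes):     ", round(outcome_entropy(mixed_results), 2)) # ~1.49 bits
```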

In this way, experimental replication would no longer be the ultimate goal. Instead, the goal would be to achieve a sort of meta-consistency. Meta-consistency could be assessed by both the robustness and statistical power of an experimental replication. And we would be able to sleep a little better at night knowing that the line between hyper-reductionism and fraudulent science has been softened while not sacrificing the rigors of the scientific method.

NOTES:

[1] Unreliable Research: trouble at the lab. Economist, October 19 (2013).

[2] Alicea, B.   Triangulating Scientific “Truths”: an ignorant perspective. Synthetic Daisies blog, December 5 (2012).

[3] Johnson, V.E.   Revised standards for statistical evidence. PNAS, doi:10.1073/pnas.1313476110 (2013).

[4] For more information, please see: Kaznatcheev, A.   Are all models wrong? Theory, Games, and Evolution Group blog, November 6 (2013).

[5] Note that the popular conception of what a theory is and what theories actually are (in scientific practice) constitutes two separate spheres of reality. Perhaps this is part of the reason for all the consternation.

[6] An n-bit register is a concept from computer science. In computer science, a register is a place to hold information during processing. In this case, processing is analogous to exploring the search space of nature. Experimental designs are thus representations of nature that enable this register.

For a more formal definition of a register, please see: Rouse, M.   What is a register? WhatIs.com (2005).

[7] This is a personal communication, as I cannot remember the original source. The larger point here, however, is that groundbreaking science is often a trial-and-error affair. For an example (and its critique), please see: Lehrer, J.   Trials and Errors: why science is failing us. Wired, December 16 (2011).

[8] For more on the complexity of psychological priming, please see: Van den Bussche, E., Van den Noortgate, W., and Reynvoet, B.   Mechanisms of masked priming: a meta-analysis. Psychological Bulletin, 135(3), 452-477 (2009).

November 11, 2013

New Work on CGSs: ritual and incorporative modeling

Here are two short features (something I am calling social media posters) that I debuted on Tumbld Thoughts. Think of this as posting entries from your laboratory notebook (one of several, in my case) to social media. In this case, each entry is further development of the Contextual Geometric Structures idea. The first poster is on the exaptation of rituals from a mathematical modeling perspective, using Halloween as a seasonally-appropriate example. The second poster is on something called incorporative modeling, using my "return from Robotistan" [1] as an example.

I. Seasonally-appropriate Cultural Exaptation


Why do rituals change over time? Why do they resemble weird things? And why do people believe even weirder things about these weird things? Here is a demonstration of something I am calling ritual modeling, using Halloween as an example. 

Some elements of this (particularly the forgetting/decay measurements) are drawn from [2]. Additional process modeling and a graph with pseudo-data were created to demonstrate the idea.



II. Returning from Robotistan


Here is a counterpart to my recent post on ritual modeling. To place this in context, here are a few readings on the highly predictive parts of human behavior. In [3], the analysis of easily-tracked human behaviors (such as mobility) reveals highly predictable patterns. In [4], Sandy Pentland from MIT discusses how big data (databases of internet behavior) allow us to predict human behavior independently of morals and values. This has the effect of uncovering behaviors (using machine learning techniques) that people are usually not straightforward about in their language or public persona.

These types of analyses are useful both in terms of understanding aggregate cultural trends and the construction of crude behavioral models. So while such approaches are highly successful at characterizing well-known behaviors, they do not consider how new information is incorporated into behavioral schemes, especially those which are not highly predictive to begin with. This requires a more purely computational (e.g. simulation-based) approach.


To address this, I introduce something called an incorporative model, which draws from earlier work on Contextual Geometric Structures (CGSs). CGSs [2] are a hybrid soft classifier/fluid dynamics-inspired computational model of cultural behavior (e.g. culturally-conditioned collective behaviors). The slides show how observations are incorporated into such models, and how this new information is shared across a population of agents.
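Here is a minimal caricature (in Python) of the incorporative step: a new observation nudges one agent's soft category memberships, and the update then diffuses to the rest of the population. To be clear, this is not the CGS formalism from [2]; the update rules and parameters are simplifying assumptions for illustration only.

```python
# A minimal caricature of "incorporative modeling": an observation nudges one agent's
# soft category memberships, and the update then diffuses to the other agents.
# This is NOT the CGS formalism from [2]; the update rules here are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_agents, n_categories = 20, 3
# Each agent holds soft (fuzzy) memberships over cultural categories, summing to 1.
memberships = rng.dirichlet(np.ones(n_categories), size=n_agents)

def incorporate(memberships, agent, observation, rate=0.5, share=0.1):
    m = memberships.copy()
    m[agent] = (1 - rate) * m[agent] + rate * observation     # direct incorporation
    others = [i for i in range(len(m)) if i != agent]
    m[others] = (1 - share) * m[others] + share * m[agent]    # diffusion to the population
    return m / m.sum(axis=1, keepdims=True)                   # renormalize rows

observation = np.array([0.9, 0.05, 0.05])    # a new, strongly category-0 observation
memberships = incorporate(memberships, agent=0, observation=observation)
print("population mean membership:", memberships.mean(axis=0).round(3))
```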





Before we end, the video for my lecture to the BEACON Center in May is now available on the BEACON Center YouTube channel. The second part discusses the application of CGSs to understanding the evolution and dynamics of economic value.

NOTES:

[1] For more musings from my trip, please see: Alicea, B.   Fear and Loathing in Robotistan. Synthetic Daisies blog, August 20 (2013).

[2] Alicea, B.   Contextual Geometric Structures: modeling the fundamental components of cultural behavior. Proceedings of Artificial Life, 13, 147-154 (2012).

[3] Song, C., Qu, Z., Blumm, N., and Barabasi, A-L.   Limits of Predictability in Human Mobility. Science, 327, 1018-1021 (2010).

[4] Pentland, S.   Predicting Customers' (Unedited) Behavior. Harvard Business Review, September 19 (2012).

November 4, 2013

From Cycles to Giant Components, a Socially-guided Tour

Here are a few thematic features cross-posted to Tumbld Thoughts. You will discover the theme as you read -- it will "emerge", shall we say. But I'm not promising deep causality. 

I. Cycles of Social Events With Little Causality?


Here are some random readings on cliodynamics and why my blogging endeavors exhibit little causality. The first set of articles [1, 2] focuses on blog mining, particularly when blog posts on a given topic yield subtle causality. In [2], the Rapport Corpus was used to compile thousands of (qualitative) accounts of the same event. These data were then statistically mined to find causal mechanisms among the convergent threads. This is (in theory) similar to the mining of lung cancer data for potential (and oftentimes false-positive) causal patterns.


Another way to establish historical causality among what are often highly qualitative and contextually-contingent accounts of observed events is to use cliodynamics [3]. Cliodynamics uses a chartist approach, similar to Forex trading strategies [4]. This might be useful for finding cycles of violence in historical data. However, Jason Collins [5] offers a critical analysis of Turchin's approach. Notably, Turchin [4] boils most of history down to two uniform cycles: secular (in which societies cycle from egalitarian to elitist to egalitarian in 200-300 years) and father-and-son (where social injustices are found and addressed in 60-80 year cycles). However, this does not account for large-scale changes (so-called Black Swan events) or other complex historical contingencies.


II. Or is there more causality than suspected?


Here are two perspectives on the Nobel Syndrome: does winning a Nobel cause brilliant minds to start investigating weird things, or does it happen all on its own? In the first feature by Bradley Voytek at Oscillatory Thoughts [6], we are introduced to the prodigy effect, where young investigators win Nobels (or similar prizes), and then go on to investigate pseudo-scientific phenomena later in their careers [7].


But does the proverbial cart (Nobel) always come before the horse (oddball research topics)? That's where the second article (by Barry Ritholz at The Big Picture blog) comes into play: is Eugene Fama (this year's winner of the Economics Nobel) an example of someone who engaged in oddball behavior before winning the prize [8]? Ritholz thinks so, and explains why that may be a pre-emptive case of Nobel syndrome.

III. Or perhaps hierarchical network effects?


Here are a few blog posts/articles on human organization, cities, and economic payoff. The first is an intellectual excursion from the Dizzynomics blog [9] on the phenomenon of buying housing as investment income in central London. This has led to massive increases in housing prices, which has displaced former residents to less desirable areas. The consequence of this strategy might be to create a ghost city (a city with no permanent residents) or, worse yet when the bubble bursts, a dead city. But what happens when a few cities (such as central London) serve as critical access points for the global economy? This trend, replicated across other cities in the global urban network, may provide a subtle causal mechanism for significant income inequality. This outsized effect (in terms of scope) of a real estate arms race on overall economic opportunity is discussed in a post [10] from the Moneybox blog.


But why is it, in the age of easy global travel and internet connectivity, that opportunity found in the critical access point cities has not decentralized to a large number of urban centers? The answer to this is partly due to the inherent relationship between a given city's creative performance and its population size. This has been articulated by Geoffrey West and others [11]. According to this idea, the largest cities should be the most economically (and creatively) productive. This scaling relationship can occasionally be violated, but such exceptions are directly dependent on the evolution of the city in question. 
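The scaling relationship in question is usually written as Y = Y0·N^β, with β ≈ 1.15 for socioeconomic outputs [11]. Here is a small sketch with placeholder values to show what superlinearity buys the largest cities.

```python
# Sketch of the superlinear urban scaling relation discussed in [11]: Y = Y0 * N**beta,
# with beta ~ 1.15 for socioeconomic outputs. Y0 and the city sizes are placeholders.
def scaled_output(population, y0=1.0, beta=1.15):
    return y0 * population ** beta

small, large = 100_000, 10_000_000  # a 100x difference in population
ratio = scaled_output(large) / scaled_output(small)
print(f"100x the people -> {ratio:.0f}x the output (per-capita gain ~{ratio / 100:.2f}x)")
```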


But perhaps the extreme concentration (or strict hierarchical organization) of places that are true engines of economic wealth creation has as much to do with the network topology that connects players in the global economy as with the inherent properties of those players. In a network analysis of Twitter messages involving two grass-roots political organizations (the Tea Party and Occupy), different network topologies might lead to different outcomes and sets of constraints on their function [12]. Perhaps the selective nature of a hyper-efficient, free-market global economy naturally leads to hyper-centralization and limited economic flexibility.


IV. Yes, Probably, with a Chance of Giant Components



It's hard to influence the "Giant Component". To make this point, here is an interesting book review: Robin Hanson of Overcoming Bias reviewing David Graeber's book "Debt: the first 5000 years" [13]. It is interesting not because of any particular insight or its length, but because here we have a dyed-in-the-wool market capitalist reading a book by a self-avowed left-wing Anarchist. And surprising because, overall, Hanson actually liked the book. I have also read "Debt", and understand Hanson's skepticism. However, there are two tacit assumptions to this dynamic that need to be understood:

1) Graeber is in an interesting position because while he is an Economic Anthropologist, he is also an activist. Therefore, the scholarship and plans for action don't always match up (as they should not). But then again, why does theory (or in this case, comparative historiography) need to be a catalyst for social change? Just because a particular theory fails to do so does not diminish what theories are actually for (e.g. explaining and predicting) [14]. And whether a particular set of theoretical assumptions actually does this is not a matter of a lack of activism.

2) The implicit goal of economics is to understand how resources are efficiently allocated. In fact, the suffix "-nomics" (or even "-omics") means "natural law", but is often used as a stand-in for quantification and optimization [15]. A goal of economics is to understand human exchange through the lens of optimal outcomes (whether or not they actually are optimal). By contrast, alternative approaches such as Economic Anthropology do not make this assumption. There is an "economics as natural law" vs. "economics as human agency" [16] dichotomy surfacing here that subtly influences much of the debate on post-crisis economics.


Now, one of Hanson's criticisms is that Graeber is inherently "anti-debt". And while Hanson does not explain why debt is a good thing (other than totally ignoring the phenomenon of predatory debt), Graeber does discuss how debt is part of a system of social and moral obligations. In this sense, debt enables a social order. However, these social orders can be unstable (due to natural disasters or wars), and it has been quite common throughout history to discharge debts. This is where Hanson has the most trouble with Graeber's position: what would happen to the world economy if debts were simply discharged? Would this not be cataclysmic? And who pays the price when debt holders are not just elites, but pension funds and endowments as well?

But this brings up a larger question: how does large-scale cultural change happen during the flow of life, and how does it happen without social collapse or (more immediately) a fundamental disruption to social life? We can view this in the context of social networks -- more specifically, in terms of the Erdos-Renyi giant component [17]. Social networks (in this case, the global economy) exhibit connectivity as a function of human exchange. In the case of modern economic social networks, a common system of finance permeates every part of the topology. This is why the financial crisis of 2008 had such a "giant" effect: the freezing of credit systems essentially had the effect of neutralizing connections throughout the network.

Setting everything back to zero can be quite destructive. Or creatively destructive.....

The giant component, under at least one interpretation, results from a phase transition in the network structure that produces a large, unified topological component. This giant component, once it emerges, is stable. But it may also be unevolvable (e.g. it serves as a cultural constraint) and perhaps even makes the entire network brittle with respect to large-scale changes [18]. It is because of this giant component that large-scale social change presents as much a risk as an opportunity: simply suspending or changing a policy or arrangement that has led to a giant component (or something of similar scale) has the potential to completely dissolve the network.
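For the curious, here is a short sketch of the Erdos-Renyi phase transition [17]: once the mean degree of a random graph exceeds 1, a single component suddenly spans most of the network.

```python
# Sketch of the Erdos-Renyi giant-component phase transition [17]: once the mean
# degree c = n*p exceeds 1, a single component suddenly spans most of the network.
import networkx as nx

n = 2000
for mean_degree in [0.5, 1.0, 1.5, 3.0]:
    G = nx.erdos_renyi_graph(n, mean_degree / n, seed=42)
    largest = max(len(c) for c in nx.connected_components(G))
    print(f"mean degree {mean_degree:>3}: largest component = {largest / n:.0%} of nodes")
```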



NOTES:

[1] Blog mining. Economist, March 11 (2010).

[2] Tomai, E., Thapa, L., Gordon, A.S., and Kang, S-H.   Causality in Hundreds of Narratives of the Same Events. Proceedings of the AAAI (2011).

[3] Turchin, P.   Arise 'cliodynamics'. Nature, 454, 34-35 (2008).

[4] For some perspectives on Turchin's work, please see:

a) Pigliucci, M.   Cliodynamics, a science of history? Rationally Speaking blog, August 4 (2008).

b) Finley, K.   Mathematicians Predict the Future With Data From the Past. Wired Enterprise, April 10 (2013).

[5] Collins, J.   Cliodynamics and complexity. Evolving Economics blog, August 6 (2012).

[6] Voytek, B.   The Prodigy Effect. Oscillatory Thoughts, June 8 (2013).

[7] Orac   Luc Montagnier: the Nobel disease strikes again. Respectful Insolence blog, November 23 (2010). Also, here is a Quora conversation on the topic.

[8] Ritholz, B.   Fama has Shiller to thank for his Nobel Prize. Big Picture blog, October 20 (2013). 

[9] Kaminska, I.   Property bubbles and ghost cities. Dizzynomics, October 9 (2013) AND Goldfarb, M. London's Great Exodus. October 12 (2013). 

[10] Yglesias, M.   America's fast-growing cities aren't prospering. Moneybox blog, September 30 (2013).

[11] Robinson, R.   Can cities break Geoffrey West’s laws of urban scaling? The Urban Technologist blog, July 23 (2013).

Bettencourt, L.M., Lobo, J., Strumsky, D., and West, G.B.   Urban scaling and its deviations: revealing the structure of wealth, innovation and crime across cities. PLoS One, 5(11), e13541 (2010).

[12] Whitty, J.   Tweet Forensics: occupy vs. tea party. Mother Jones, November 17 (2011).

[13] Hanson, R.   Graeber's Debt book. Overcoming Bias blog, October 6 (2013).

[14] Johnson, T.   How economics suffers from de-politicised mathematics. Magic, Maths, and Money blog, September 21 (2013).

[15] One man's quest to make "omics" all about his life (and biology): Dennis, C.   The rise of the narciss-ome. Nature News, March 16 (2012).

[16] The performativity hypothesis, summarized in the aptly-named book: MacKenzie, D.   An engine, not a camera: how financial models shape markets. MIT Press (2008).

[17] Erdos-Renyi model: Erdos, P. and Renyi, A.  On the evolution of random graphs. Publications of the Mathematical Institute of the Hungarian Academy of Sciences, 5, 17–61 (1960).


Hayes, B.   The birth of the giant component. bit-player blog, November 20 (2009).

[18] Jones, J.H.   Nearly Neutral Networks and Holey Adaptive Landscapes. Monkey's Uncle blog, December 29 (2008).
