Notice that I could have used the word "theory", but what I am describing actually has more to do with the cultural premises of one's discipline and formal training [1], especially in cases where there is a lack of good theory.
These slides are the second part of a talk of mine called "If your results are unpredictable, does it make them any less true?" (posted to Figshare), which is a follow-up to the HTDE 2012 Workshop.
This set of slides was inspired by an in-lab discussion about a news article, which led me to a recent PNAS paper on sepsis research in mice and humans. While mice are the accepted model organism for studying sepsis [2], it turns out that the physiological response to sepsis (e.g. as characterized by microarray studies and gene expression correlations) is very different in humans than in mice.
This result is interesting from an evolutionary standpoint. While there is phylogenetic distance between mice and humans, they are both mammals and certainly share many physiological and genomic characteristics. So can these differences be explained using evolutionary theory? Has there been evolution in the sepsis response between mice and humans, or are these differences due to a highly variable response that can differ widely between species (and perhaps even between individuals of the same species)?
The variation in pathway activation and physiological responses seems to be quite common in medical research. When a given experimental manipulation is applied to multiple species [3], there is a range of possible outcomes, from a common response to widely varying ones. We will return to this later.
For now, let's consider why such massive differences might exist between humans and mice for a single physiological response. This is where we must return to the issue of premises. Given your background and preferences, you might choose a single explanatory framework.
I have presented three in the slide below: black box, complexity, and noise. Each of these may find support depending on the measures used and the physiological components assayed. Yet each of them, used in isolation, may not be particularly satisfying, nor explain very much of the data by itself. This is why good, unified theories are of such value.
Another important aspect of understanding this variable response is to rule out alternative hypotheses. In the slide below, I consider three potential artifacts that could unduly influence the animal model results: standardization of environmental conditions, artificial selection on the model organism population due to selective breeding, and the tendency of the experimenter to put more weight on features of the experimental design or analysis that allow for greater experimental replication within a particular species. Particularly in the case of the first and last point, the lesson is that standardization of the experimental setting may actually do more harm than good and introduce ecological validity problems.
Now I present my interpretation of what is going on with the sepsis result. This consists of two hypotheses that can be applied to each species (human and mouse). The first is that the physiological response to sepsis is exact, meaning that it utilizes the same pathways and the same patterns of gene expression across most conspecifics, but only within a single species. Under this hypothesis, differences between species might require mutational distance and other evolutionary changes among the genes that explain the sepsis phenotype.
The alternate hypothesis is that the physiological response to sepsis is variational, which means that there is potentially great variation in mechanism across members of the same species. This variation need not be due to heritable mutation, but may simply reflect a lack of specificity in the molecular pathways and other associated mechanisms. In this case, the differences between human and mouse would be far greater than a consensus phylogeny might suggest.
What is a variational response? The term "variational" [4] is taken (perhaps loosely) from the mathematics and physics literature, and is generally used to describe a system with many potential solutions. In this context, the goal of the variational method is to approximate potential solutions based on optimizing their properties.
One example can be seen in the slide below: two alternate routes from Toronto to Vancouver. Each route is the "shortest" route under one of two pathway criteria. One pathway is tightly restricted to the Trans-Canada Highway, while the other allows for an alternate route along a number of US interstates (e.g. I-5, I-90, I-94). Both routes are about the same number of kilometers in length (analogous to the number of steps in a physiological pathway). Yet they might be used alternately due to the incapacitation of one pathway or the other [5].
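To make the routing analogy concrete, here is a minimal sketch (mine, not from the original slides) using a hypothetical toy graph: two near-equal routes between the same endpoints, where "incapacitating" an edge on one route shifts the optimum to the other. The place names and edge weights are illustrative placeholders only.

```python
# Minimal sketch of the routing analogy: two near-equal "shortest" paths,
# where blocking one (incapacitating a pathway) shifts the optimum to the other.
# The graph and edge weights are hypothetical illustrations, not real distances.
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm; returns (total_cost, path) or (inf, []) if unreachable."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical network: a "northern" route (Trans-Canada-like) and a
# "southern" route (US-interstate-like) of roughly equal total length.
graph = {
    "Toronto":     {"Sudbury": 400, "Detroit": 370},
    "Sudbury":     {"Winnipeg": 1500},
    "Winnipeg":    {"Calgary": 1300},
    "Calgary":     {"Vancouver": 970},
    "Detroit":     {"Minneapolis": 1100},
    "Minneapolis": {"Seattle": 2650},
    "Seattle":     {"Vancouver": 230},
}

print(shortest_path(graph, "Toronto", "Vancouver"))

# "Incapacitate" one pathway (close the northern leg); the alternate route,
# only slightly longer, now becomes the optimum.
blocked = {k: dict(v) for k, v in graph.items()}
del blocked["Sudbury"]["Winnipeg"]
print(shortest_path(blocked, "Toronto", "Vancouver"))
```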
The slide below shows these hypotheses in a phylogenetic context. As a contingency table, we consider the exact and variational scenarios for both conserved and divergent mechanisms. In the case of a conserved mechanism, there is very little mutational change to the underlying genes or pathway. For a divergent mechanism, the opposite is true.
To further understand what is meant by evolutionary conservation (and how it affects the consistency of physiological responses across species), I will now discuss two examples from the literature: the regulation of stress and aging, and the use of zebrafish as a human analogue. This will hopefully put my evolutionary speculations in context.
In aging research, phylogenetically divergent species such as yeast and flatworms are used to understand the substrate of interventions such as caloric restriction and the activity of pathways related to stress resistance. As Longo and Fabrizio [6] demonstrate using aggregated data (see the slide below), the associated pathway architectures are quite invariant across yeast, flatworms, and humans. However, this may not involve the same genes from species to species. In cases where conserved genes are known to be involved, it is not clear whether this conservation of mechanism components extends to conservation of the mechanism itself.
A recent set of papers [7] focuses on comparing the genomes and proteomes of zebrafish and humans. As zebrafish and humans diverged around 440 million years ago [8], we would expect there to be vast differences in both function and genomics. However, there are occasionally greater differences among zebrafish than between zebrafish and humans. This is another puzzle similar to the sepsis story, except that now we have an extensive characterization of the genome and proteome to work from.
In the slide below (taken from Figure 3 of the Howe et al. paper), we can see how orthologues are shared by zebrafish and human, as well as the relationship between so-called ohnologues in the zebrafish genome. Data such as these may provide good future estimates of how and why differences exist when evaluating variation in basic physiological functions within and between zebrafish and humans.
So what can we learn from the big picture, particularly when distinguishing between the homogeneity expected from experimental replication and the heterogeneity posed by natural variation? Perhaps we can treat experimental replication as a generative model, where the basic experiment is expected to reveal a range of likely outcomes. As with generative models in machine learning, the goal of analysis is to pick the best model (or, in this case, the set of data that provides the closest match to what we know about the underlying natural phenomenon).
This is a tricky proposition, because both the set of possible experimental outcomes and the set of natural outcomes are incompletely known. Nevertheless, as in the case of understanding physiological processes and outcomes as variational processes, we can make good approximations that provide high explanatory power [9] without over-relying on the replication of results.
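To illustrate the generative-model framing (again, my sketch rather than part of the original slides), the toy example below scores two hypothetical candidate models of a replicated measurement, an "exact" (low-variance) response and a "variational" (high-variance) response, by their likelihood given some made-up replicate values.

```python
# A minimal sketch of treating replicated experiments as draws from a generative
# model, then scoring candidate models by likelihood. All numbers, model names,
# and the "observed" replicate values below are hypothetical illustrations.
import math

def log_likelihood(data, mu, sigma):
    """Log-likelihood of the data under a Gaussian generative model N(mu, sigma^2)."""
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (x - mu)**2 / (2 * sigma**2) for x in data)

# Hypothetical replicate measurements of some response (arbitrary units).
observed_replicates = [1.9, 2.4, 0.7, 3.1, 2.2]

# Two candidate generative models for the "true" response:
# an exact response (tight distribution) vs. a variational one (broad distribution).
candidates = {
    "exact (low variance)":        {"mu": 2.0, "sigma": 0.3},
    "variational (high variance)": {"mu": 2.0, "sigma": 1.0},
}

scores = {name: log_likelihood(observed_replicates, **p) for name, p in candidates.items()}
best = max(scores, key=scores.get)
print(scores)
print("Best-supported model:", best)
```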
NOTES:
[1] cultural premises are also known as "point of view". See this Tumbld Thoughts post for a detailed review.
From two wildly different premises: a painting entitled "Picasso and Dali Paint an Egg" (Artist unknown).
[2] model organisms are used to conduct experiments that are either unethical or impossible to carry out with human subjects. Here is the full NIH list of model organisms. The accepted human analogues range from fruit flies (Drosophila) and roundworms (C. elegans) to mice (Mus musculus) and zebrafish (Danio rerio). A newer trend is to use domesticated animals (e.g. sheep, cows, goats, pigs) as (non-traditional) model organisms.
Please see the following papers for more information on cross-species comparisons of model organisms with relevance to disease:
Golstein, P., Aubry, L., and Levraud, J.P. Cell-death alternative model organisms: why and which? Nature Reviews Molecular Cell Biology, 4(10), 798-807 (2003).
Golstein, P. Cell death in unusual but informative and beautiful model organisms. Seminars in Cancer Biology, 17(2), 91-93 (2007).
[3] this effect can be observed (usually understood via anecdotal reporting methods) both in vivo and in vitro (cell culture models).
[4] the variational principle is widely used in quantum physics and engineering to arrive at solutions in very large, complex systems. Why not a version of this idea for physiological systems analysis?
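As a reminder of what the principle says in the quantum setting (my paraphrase, not part of the original post): the expected energy of any trial state bounds the true ground-state energy from above, so candidate solutions can be ranked and optimized against one another:

$$ E_0 \;\le\; \frac{\langle \psi_{\mathrm{trial}} \,|\, \hat{H} \,|\, \psi_{\mathrm{trial}} \rangle}{\langle \psi_{\mathrm{trial}} \,|\, \psi_{\mathrm{trial}} \rangle} $$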
[5] this suggests a role for mechanisms such as robustness, evolvability, and degeneracy.
[6] Longo, V.D. and Fabrizio, P. Regulation of longevity and stress resistance: a molecular strategy conserved from yeast to humans? Cellular and Molecular Life Sciences (CMLS), 59(6), 903-908 (2002).
[7] Here are a host of relevant papers (including a recent feature article in Nature):
a. Varshney, G.K. et al. A large-scale zebrafish gene knockout resource for the genome-wide study of gene function. Genome Research, 23, 727-735 (2013).
b. Kettleborough, R.N.W. et al. A systematic genome-wide analysis of zebrafish protein-coding gene function. Nature, doi:10.1038/nature11992 (2013).
c. Schier, A.F. Zebrafish earns its stripes. Nature, doi:10.1038/nature12094 (2013).
d. Howe, K. et al. The zebrafish reference genome sequence and its relationship to the human genome. Nature, doi:10.1038/nature12111 (2013).
e. Barbazuk, W.B. et al. The Syntenic Relationship of the Zebrafish and Human Genomes. Genome Research, 10, 1351-1358 (2000).
[8] data derived from multiple consensus phylogenies (a meta-meta analysis) curated by Timetree.org.
[9] This was cross-posted to my micro-blog, Tumbld Thoughts:
A new paper by Button et al. [a], featured in Wired Science, claims that research in the neurosciences is plagued by low statistical power (i.e. a low probability of detecting true effects). The argument builds on a 2005 paper by John Ioannidis [b] that applies a measure called positive predictive value (PPV) to determine the reliability of results in a particular scientific field (top image). While Ioannidis originally focused on results in Psychology, in later papers he has extended this line of inquiry to Computational Biology (e.g. microarray analysis) [c].
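For concreteness, here is a minimal sketch of the PPV calculation in the spirit of Ioannidis (2005), with made-up pre-study odds; it shows how lower power pulls down the fraction of "significant" findings that are actually true.

```python
# A minimal sketch of positive predictive value (PPV) in the spirit of Ioannidis (2005):
# the probability that a claimed (statistically significant) finding is true, given
# pre-study odds R, significance level alpha, and power (1 - beta). Values are illustrative.
def ppv(R, alpha, power):
    """PPV = (1 - beta) * R / ((1 - beta) * R + alpha), ignoring bias terms."""
    return (power * R) / (power * R + alpha)

R = 0.25        # hypothetical pre-study odds that a probed relationship is true
alpha = 0.05    # conventional significance threshold

for power in (0.8, 0.5, 0.2):
    print(f"power = {power:.1f} -> PPV = {ppv(R, alpha, power):.2f}")
# Lower power means a smaller fraction of significant results are true findings.
```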
This reliability can be compromised by something called the Proteus phenomenon [d], which deals with drawing a consensus from a series of datasets that exhibit similar biases. Two potential examples of this can be seen in a meta-meta-analysis of the Psychological literature (Figure 3 of [a]) and in the Social Psychology literature. In the case of the latter, a paper by Vul et al. [e] investigates the exceedingly high correlations between brain activity measures derived from neuroimaging and personality (e.g. self-reported) measures. Does this mean that there truly IS a high correlation, or is a subtle bias at work here?
Whether or not these concerns are overblown is up for debate, and they may be an artifact of the way we test for significance (e.g. NHST) rather than inherent problems with the method of experimental replication [f]. Fortunately, people are trying to address some of these issues (bottom image). Examples include the EQUATOR Network [g] and the Reproducibility Project [h], both of which advocate open science. And, of course, there are more philosophically-oriented issues that I have started to address with the Hard-to-Define Events (HTDE) approach.
For Reference 9, see also:
[a] Button, K.S., Ioannidis, J.P.A., Mokrysz, C., Nosek, B.A., Flint, J., Robinson, E.S.J., and Munafo, M.R. Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, doi:10.1038/nrn3475 (2013).
[b] Ioannidis, J. P. Why most published research findings are false. PLoS Medicine, 2, e124 (2005).
[c] Ioannidis, J.P. et al. Repeatability of published microarray gene expression analyses. Nature Genetics, 41, 149-155 (2009).
[d] Pfeiffer, T., Bertram, L., and Ioannidis, J.P. Quantifying selective reporting and the Proteus phenomenon for multiple datasets with similar bias. PLoS ONE, 6, e18362 (2011).
"The chances for non-significant studies going in the same direction as the initial result are estimated to be lower than the chances for non-significant studies opposing the initial result"
[e] Vul, E., Harris, C., Winkielman, P., and Pashler, H. Puzzlingly High Correlations in fMRI Studies of Emotion, Personality, and Social Cognition. Perspectives on Psychological Science, 4, 274 (2009). Also see Ed Vul's site on "voodoo correlations".
[f] for more information on the BEST test (an alternative to tests of the null hypothesis), please see: Kruschke, J.K. Bayesian estimation supersedes the t test. Journal of Experimental Psychology: General (2012).
[g] Simera, I. et al. Transparent and accurate reporting increases reliability, utility, and impact of your research: reporting guidelines and the EQUATOR Network. BMC Medicine, 8, 24 (2010).
Other (more specialized) consortia geared toward this end include the Consolidated Standards of Reporting Trials (CONSORT), the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), and the Collaborative Approach to Meta-Analysis and Review of Animal Data in Experimental Stroke (CAMARADES).
[h] Open Science Collaboration. An open, large-scale, collaborative effort to estimate the reproducibility of psychological science. Perspectives on Psychological Science, 7, 657-660 (2012).