February 25, 2013

Innovation-palooza for February

Here are a couple more cross-posts (on Hard-to-Define Events and Scientific Innovation, respectively) from my micro-blog, Tumbld Thoughts.






A. Take a look at the link [1] to a slideshow I am presenting in the near future as a follow-up to the Hard-to-Define Events workshop I organized last summer. It is called "If your results are unpredictable, does it make them any less true?", and focuses on applying the hard-to-define events paradigm to biology and the development of scientific theory.


B. And here is an article from Joe Nocera of the NYT called "Innovation Nation at War" [2]. It is a critical assessment of the current patent system and how it is hurting the innovation economy. Specifically, he discusses patents in terms of their economic utility, and how companies buy patent portfolios and file patent lawsuits [3] in a manner that severely violates this principle. Perhaps a better solution would be a bond system tied to idea futures, which would provide a more immediate payoff with less legal jujitsu [4].





C. As an added bonus, here is a podcast (courtesy of the NanoNerds YouTube channel) with Brian Bergstein from MIT Technology Review on the top ten emerging technologies of 2012. The continued development of cheap nanopore DNA sequencing technology is my favorite.

NOTES:


[2] here is a real-time update of the "Tweetscape" for this article. Interesting comments.

[3] A conflict termed the "patent wars". Interesting infographic here.

[4] The martial arts reference is figurative, but the legal maneuvering is quite real (see here for an example on the anti-commons). See diagrams above for more information. These are some ideas I have been developing, and they stand in contrast to intellectual property-oriented patents.



February 19, 2013

Ratchets in Nature: a short review

A natural ratchet (similar to a simple machine-style ratchet, see Figure 1) can be defined as a mechanism that operates with one degree of freedom (forward and backward along an axis), but is constrained to exploit movement in only one direction (forward movement). This allows for nominally random processes to become directed without an external guide.

Ratchet mechanisms have been proposed for development, complexity, evolution, biophysics, and cognitive performance. Table 1 profiles nine different types of ratchet found in nature. There are likely many more natural processes with similar behavior which are not included here.

Figure 1. An animation of a ratchet mechanism. COURTESY: Wikipedia.

To illustrate how the ratchet metaphor has been applied in science, and to show the parallels between the various conceptual models, we will start with evolutionary ratchets (related to genetic background) and then move on to other types of natural ratchet (which are more directly related to energetics and movement).

Table 1. Different types of ratchet in nature (click to enlarge).
1 instance of a Brownian ratchet (example from actin dynamics, see [1]).

Evolutionary ratchets map the degree of movement freedom in a mechanical ratchet to increases and decreases in mutability and/or fitness (e.g. axes of variation). These fluctuations are then constrained to net increases in these variables.

To explain apparent directionality in evolution, three ratchet-like mechanisms (type I) have been proposed: the epistatic, Muller's, and cognitive ratchets. The epistatic ratchet is driven by epistatic interactions, so that mutations key to defining functional changes depend on corresponding mutations in genes of smaller effect [2]. For Muller's ratchet, high mutation rates and neutral processes work together to produce a bias towards genotypes with a greater number of mutations [3].
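To make the Muller's ratchet idea concrete, here is a minimal simulation sketch in Python (using only NumPy). The population size, mutation rate, and selection coefficient below are illustrative assumptions of mine, not values from [3]: each generation, asexual offspring inherit their parent's deleterious mutations plus a Poisson-distributed number of new ones, and in a small population the least-loaded class is eventually lost to drift and can never be recovered.

import numpy as np

rng = np.random.default_rng(0)

N = 200          # population size (assumed, illustrative)
U = 0.5          # deleterious mutation rate per genome per generation (assumed)
s = 0.02         # selection coefficient per mutation (assumed)
generations = 2000

load = np.zeros(N, dtype=int)    # mutation count carried by each asexual individual
min_load = []

for t in range(generations):
    fitness = (1.0 - s) ** load                                   # multiplicative fitness cost
    parents = rng.choice(N, size=N, p=fitness / fitness.sum())    # Wright-Fisher sampling
    load = load[parents] + rng.poisson(U, size=N)                 # no recombination, no back mutation
    min_load.append(load.min())

print("least-loaded class at start:", min_load[0])
print("least-loaded class at end:  ", min_load[-1])

Because there are no back mutations and no recombination, the minimum load can only stay put or increase; each increase is one irreversible "click" of the ratchet, and rerunning with a larger N shows how population size slows the clicking.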

In the case of both epistatic and Muller's ratchets, mutational change becomes directional [4] rather than a random walk [4.1]. While this type of mutational change appears at first glance to be directed in some top-down manner, Maynard Smith and Szathmary [4.2] show that it can arise spontaneously, proposing a ratchet-like process called contingent irreversibility. In a similar but goal-directed manner, the cognitive ratchet works at the scale of behavior, and relies on anticipatory (behavioral) abilities for certain behaviors to drive further evolution of these abilities over longer time-scales [5].

Thermal ratchets (type II) are an instance of a Brownian ratchet, which is driven by Brownian (or quasi-random) noise [6]. Huxley and Simmons [7] describe how thermal fluctuations allow myosin heads to bind actin filaments under strain. This configuration generates tension and ultimately movement (e.g. kinesis). These observations were incorporated into the dominant theory of how muscle fibers contract (e.g. sliding filament theory, see Figure 2). While other types of natural ratchet are largely metaphorical, the physics of type II natural ratchets are the most similar to the physics of a simple machine-style ratchet.
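To give a feel for how a type II ratchet rectifies noise, here is a toy "flashing ratchet" sketch: an overdamped particle diffusing in an asymmetric sawtooth potential that is periodically switched off and on acquires a net drift, even though the noise itself is unbiased. None of the parameters come from [6] or [7]; they are placeholder values chosen so the effect is visible.

import numpy as np

rng = np.random.default_rng(1)

V0, a = 5.0, 0.2        # sawtooth barrier height and asymmetry (assumed)
D = 0.1                 # diffusion coefficient (assumed)
dt, steps = 1e-3, 200000
flash_period = 2000     # time steps between switching the potential on/off

def force(x, on):
    # piecewise-linear sawtooth with period 1: minima at integers, maxima at integer + a
    if not on:
        return 0.0
    u = x % 1.0
    return -V0 / a if u < a else V0 / (1.0 - a)

x, on = 0.0, True
for i in range(steps):
    if i % flash_period == 0:
        on = not on                               # flash the potential
    x += force(x, on) * dt + np.sqrt(2 * D * dt) * rng.standard_normal()

print("net displacement: %.2f sawtooth periods" % x)

Averaged over many runs the displacement is systematically in one direction (which direction depends on how the sawtooth is skewed). That is the essential ratchet behavior: unbiased fluctuations plus a broken spatial symmetry yield directed motion.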

Figure 2. A visual description of sliding filament theory (e.g. a thermal ratchet that drives contraction). COURTESY: Engrade.

The next two types of ratchet (type III) involve the transmission and expression of genetic material, respectively. In the case of a gene transfer ratchet, the ratchet mechanism is used to explain the acquisition of bacterial genes by Eukaryotes [8]. Theoretically, it is suggested that early Eukaryotes acquired bacterial genes from the bacteria they consumed as food. Enhancer action ratchets push a genome's regulatory machinery towards bistability for chromatin markers of specific regulatory regions, which in turn enables changes in gene expression during a biological process [9].
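The bistability mentioned for enhancer action ratchets can be illustrated with a textbook positive-feedback toy model (a generic sketch of mine, not the network analyzed in [9]): a gene product that activates its own expression through a Hill-type term settles into either a low or a high stable state, and once pushed over the threshold into the high state it tends to stay there, which is what gives the process its ratchet-like character.

import numpy as np

# dx/dt = a + b * x^n / (K^n + x^n) - g * x   (all parameter values are illustrative)
a, b, K, n, g = 0.1, 2.0, 1.0, 4, 1.0

def settle(x0, dt=0.01, steps=5000):
    x = x0
    for _ in range(steps):
        x += dt * (a + b * x**n / (K**n + x**n) - g * x)
    return x

# starting below vs. above the unstable threshold gives two different stable states
print("low start  ->", round(settle(0.1), 3))
print("high start ->", round(settle(2.0), 3))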

The last three types of ratchet (type IV) have relevance to cellular and organismal movement. The directed tissue movement ratchet is driven by selectively-timed pulsed forces that occur during tissue differentiation and organization in development [10]. The filament-pulling (diffusive) ratchet relies on a protein's interactions with the geometry of cell bodies to enable functions such as DNA segregation during mitosis [11]. Figure 3 demonstrates this in an ex vivo setting. Finally, bacterial motors are ratchet-like mechanisms used to enable bacterial motility, usually through a flagellar structure [12].

But what enables directional behavior in what are essentially undirected processes? In some cases (such as epistatic effects or Brownian ratcheting), a mechanism called stochastic resonance might be responsible [13]. Stochastic resonance (see Figure 4) involves the emergence of order by adding a noisy signal to a process which is already embedded in noise. This provides a route to order from chaos without any sort of intelligence.

And while there are no hard and fast rules to being noisy, the basic idea is that randomness can help define (or at least constrain) the system of interest [14]. A second possible mechanism involves a source of information acting as a driver of the ratcheting mechanism. This is, in principle at least, similar to the Szilard engine that is supposed to provide directionality to entropic processes [15].
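A quick way to see stochastic resonance at work is a threshold detector fed a subthreshold periodic signal: with too little noise the detector never fires, with too much noise it fires at random, and at an intermediate noise level its output tracks the hidden signal best. The sketch below is my own illustration (the noise levels and threshold are arbitrary), not an example from [13].

import numpy as np

rng = np.random.default_rng(2)

t = np.linspace(0, 40 * np.pi, 20000)
signal = 0.8 * np.sin(t)            # subthreshold: amplitude below the threshold of 1.0
threshold = 1.0

for sigma in [0.05, 0.3, 3.0]:      # low, intermediate, and high noise levels
    noisy = signal + sigma * rng.standard_normal(signal.size)
    fired = (noisy > threshold).astype(float)        # detector output: 1 when threshold is crossed
    r = 0.0 if fired.std() == 0 else np.corrcoef(fired, signal)[0, 1]
    print("noise sigma = %.2f  correlation with hidden signal = %.3f" % (sigma, r))

The intermediate noise level gives the strongest correlation between the detector output and the underlying signal, which is the signature of stochastic resonance.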

Figure 3. Instances of geometry sensing and the filament-pulling (diffusive) ratchet. TOP: diffusive coupling at different constants (from Supplementary Movie 8 in [11]). BOTTOM: Min protein on various membrane geometries (from Supplementary Movie 6 in [11]).

Figure 4. An example of stochastic noise (and resonance) in visual stimuli. Add white noise to the image, and it contributes to resolution of the underlying pattern. COURTESY: Wikipedia.

Full Citations and Notes:

[1] Carlsson, A.E.  Actin dynamics: from nanoscale to microscale. Annual Review of Biophysics, 39, 91-110 (2010).

For a more general review, please see this paper: Reimann, P. and Hanggi, P.   Introduction to the physics of Brownian motors. Applied Physics A, 75, 169-178 (2002).

[2] Bridgham, J.T., Ortlund, E.A., and Thornton, J.W.  An epistatic ratchet constrains the direction of glucocorticoid receptor evolution. Nature, 461, 515-519 (2009). See also the related concept of constructive neutral evolution (CNE):

[2.1] Lukes, J., Archibald, J.M.,  Keeling, P.J., Doolittle, W.F., and Gray, M.W.   How a neutral evolutionary ratchet can build cellular complexity. IUBMB Life, 63(7), 528-537 (2011).

[2.2] Atkins, A.R. and Lambowitz, A.M.   A protein required for splicing group I introns in Neurospora mitochondria is mitochondrial tyrosyl-tRNA synthetase or a derivative thereof. Cell, 50, 331–345 (1987).

[2.3] Stoltzfus, A  On the possibility of constructive neutral evolution. Journal of Molecular Evolution, 49(2), 169-181 (1999).

[3] Muller, H.J.  The relation of recombination to mutational advance. Mutation Research, 1(1), 2–9 (1964).

[4] Evolutionary ratchet mechanisms are closely tied to random and/or neutral processes, as demonstrated in these two references:

[4.1] Codling, E.A., Plank, M.J., and Benhamou, S.  Random walk models in biology. Journal of the Royal Society Interface, 5(25), 813-834 (2008).

[4.2] Maynard Smith, J. and Szathmary, E.   The Major Transitions in Evolution. Oxford University Press, Oxford UK (1995).

[5] Riegler, A.  The Cognitive Ratchet: the ratchet effect as a fundamental principle in evolution and cognition. Cybernetics and Systems, 32, 411–427 (2001).

[6] Longtin, A.  Stochastic dynamical systems. Scholarpedia, 5(4), 1619 (2010).

[7] Huxley, A.F. and Simmons, R.M.  Proposed Mechanism of Force Generation in Striated Muscle. Nature, 233, 533-538 (1971).

[8] Doolittle, W.F.  You are what you eat: a gene transfer ratchet could account for bacterial genes in eukaryotic nuclear genomes. Trends in Genetics, 14, 307-311 (1998).

[9] Narula, J., Smith, A.M., Gottgens, B., and Igoshin, O.A.  Modeling Reveals Bistability and Low-Pass Filtering in the Network Module Determining Blood Stem Cell Fate. PLoS Computational Biology, 6(5), e1000771 (2010).

[10] Solon, J., Kaya-Copur, A., Colombelli, J., and Brunner, D.  Pulsed Forces Timed by a Ratchet-like Mechanism Drive Directed Tissue Movement during Dorsal Closure. Cell, 137, 1331-1342 (2009).

[11] Schweizer, J., Loose, M., Bonny, M., Kruse, K., Monch, I., and Schwille, P.  Geometry sensing by self-organized protein patterns. PNAS, 109(38), 15283-15288 (2012).

[12] Di Leonardo, R., Angelani, L., Dell'Arciprete, D., Ruocco, G., Iebba, V., Schippa, S., Conte, M.P., Mecarini, F., De Angelis, F., and Di Fabrizio, E.  Bacterial ratchet motors. PNAS, 107, 9541-9545 (2010).

[13] Rouvas-Nicolis, C. and Nicolis, G.  Stochastic resonance. Scholarpedia, 2(11), 474 (2007).

[14] Wimsatt, W.  Randomness and perceived randomness in biological systems. Synthese, 43, 287-329 (1980). 

Interesting linkages between randomness (in the algorithmic sense) and evolutionary biology from a biologist's point of view.

[15] Parrondo, J.M.R.  The Szilard engine revisited: entropy, macroscopic randomness, and symmetry breaking phase transitions. Chaos, 11(3), 725-733 (2001).

Also see this Cosmic Variance blog post on the connection between information and thermodynamic entropy: Carroll, S.  Using Information to Extract Energy. Cosmic Variance blog. November 22 (2010).

February 11, 2013

A sparse, anti-fragile universe.....

This article is being cross-posted [1] from my micro-blog, Tumbld Thoughts.

Not a particularly sparse universe, by the way. But a home of sparse variables? Perhaps.

And now, dispatches from an antifragile universe [2]. Nassim Nicholas Taleb has a guest article in Wired (entitled "Beware the Big Errors of Big Data") in which he warns against the use of sparse variables [3] in so-called high-throughput datasets.



NOTES:
[1] also re-posted to the Mathematica group on Tumblr.

[2] N.N. Taleb  "Antifragile: things that gain from disorder". RSA lecture here. I've yet to decide whether Taleb is the Rasputin or the Tesla of analytics. Regardless, here is a video from the New York Public Library speaker series featuring Taleb discussing the concept of anti-fragility with cognitive psychologist Daniel Kahneman.


[3] variables that have a high signal-to-noise ratio. Or, why sometimes Watson gives ridiculous answers. Generally speaking, the more variables you have in an analysis, the greater the number of spurious correlations you must contend with. This is quite similar to Bellman's "curse of dimensionality".
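Taleb's warning is easy to reproduce numerically. Generate purely random, independent variables and the number of variable pairs that look "significantly" correlated grows rapidly with the number of variables, even though there is no real signal anywhere. The sample sizes and threshold below are arbitrary choices of mine:

import numpy as np

rng = np.random.default_rng(3)
n_obs = 100                                      # observations per variable (assumed)

for n_vars in [10, 100, 500]:
    X = rng.standard_normal((n_obs, n_vars))     # pure noise: no real relationships
    corr = np.corrcoef(X, rowvar=False)
    upper = np.triu(np.ones_like(corr, dtype=bool), k=1)   # each pair counted once
    spurious = int(np.sum(np.abs(corr[upper]) > 0.25))
    print("%4d variables -> %5d variable pairs with |r| > 0.25" % (n_vars, spurious))

With 100 observations, |r| > 0.25 looks impressive for any single pair, yet hundreds of such pairs appear by chance once the variable count gets large.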

February 9, 2013

Notes From the TSP Revolution

This is being cross-posted from my micro-blog, Tumbld Thoughts.


Here is a link [1] to the latest research on the traveling salesman problem (TSP). The TSP [2] involves visiting a set of cities using the shortest path possible while visiting each city only once (e.g. finding the shortest route). Recent research has demonstrated that by using the Christofides algorithm, good approximations to a solution (at most 50% longer than the true shortest route) are possible [3]. Solutions to the TSP can be applied to a wide range of problems, including logistics, phylogenetic analysis, and power grid optimization.
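For readers who want to experiment, here is a small sketch of the construct-then-improve approach used in practice on random instances. To be clear, this is a nearest-neighbor tour followed by 2-opt improvement, not Christofides' algorithm (which requires a minimum-weight perfect matching step and carries the 50% guarantee); the code and point set are purely illustrative.

import numpy as np

rng = np.random.default_rng(4)
pts = rng.random((60, 2))                                   # 60 random "cities" in the unit square
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

def tour_length(tour):
    return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

# nearest-neighbor construction: always visit the closest unvisited city
tour, unvisited = [0], set(range(1, len(pts)))
while unvisited:
    nxt = min(unvisited, key=lambda j: dist[tour[-1], j])
    tour.append(nxt)
    unvisited.remove(nxt)
print("nearest-neighbor tour length: %.3f" % tour_length(tour))

# 2-opt improvement: reverse a segment whenever doing so shortens the tour
improved = True
while improved:
    improved = False
    for i in range(1, len(tour) - 1):
        for j in range(i + 1, len(tour)):
            candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            if tour_length(candidate) < tour_length(tour):
                tour, improved = candidate, True
print("after 2-opt improvement:      %.3f" % tour_length(tour))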



NOTES:
[1] New directions in TSP research (website hosted by Georgia Tech). The first picture shows a TSP tour on a set of points resembling "Mona Lisa". The second picture (map of the United States) is representative of a more typical TSP problem (very large set size).

[2] The TSP is an NP-hard graph optimization problem, which requires innovative algorithmic solutions such as the Christofides algorithm or ant colony optimization (ACO). An example from the ACO literature:

Dorigo, M. and Stutzle, T.   Ant Colony Optimization: Overview and Recent Advances. In "Handbook of Metaheuristics", M. Gendreau and J.-Y. Potvin eds, International Series in Operations Research and Management Science, Springer, Berlin (2010) Chapter 8.

In response to this post, I received this comment from Minus-Five:
"Interesting article, but Christofides’ algorithm (or its analysis) is not a novelty: What’s actually new here is the discovery that it can be improved, if only by an epsilon (so far)". 
[3] Klarreich, E.   Computer Scientists Find New Shortcuts for Infamous Traveling Salesman Problem. Wired Science, January 30 (2013).

February 5, 2013

Collective Properties of Cellular Identity, new arXiv paper

Here's a paper of mine recently accepted to the arXiv, cross-listed under the q-bio.GN and q-bio.QM categories:


Collective Properties of Cellular Identity: a computational approach

This is independent work I did while in the Cellular Reprogramming Laboratory (and the first paper under the Orthogonal Research affiliation). The data consist of secondary analysis (not quite a formal meta-analysis, but similar) conducted on 45 microarray experiments. In doing so, I employ a unique set of analytical techniques (e.g. soft classification) and evaluate the general tendencies and patterns that constitute cellular type. For example, what constitutes the differences and commonalities among a range of pluripotent (e.g. stem-like) cell types? Some of the analyses were done using MATLAB, the code for which (disclaimer: use at your own disappointment) can be found on GitHub. See the cartoon graphic below and the full paper for more information.
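As a generic illustration of what soft classification means here (this is not the MATLAB code from the paper, just a hedged sketch using a Gaussian mixture model in scikit-learn): instead of forcing each sample into a single class, every sample receives a graded membership in every class, which is useful when cell types blend into one another.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)

# toy stand-in for expression profiles: two overlapping "cell types" in five dimensions
type_a = rng.normal(0.0, 1.0, size=(40, 5))
type_b = rng.normal(1.5, 1.0, size=(40, 5))
X = np.vstack([type_a, type_b])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
membership = gmm.predict_proba(X)     # soft classification: per-sample class probabilities

# samples near the overlap receive split memberships rather than a hard label
print(np.round(membership[:3], 2))
print(np.round(membership[-3:], 2))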


FYI, Nature News recently ran a feature on the potential of arXiv for the publication of results in the genetics and biology communities. arXiv: not just for physicists anymore!



February 1, 2013

Projective Models: a new explanatory paradigm


Predictive models are well-known, and have been deployed in a number of high-profile technologies. From IBM's Watson [1] to the development of autonomous aircraft [2], predictive models use a statistical basis for making inferences about future events. Theoretical models of the brain suggest that the brain is a prediction machine, inferring the world from past events [3]. Yet how intelligent can a statistical model be, especially with regard to complex events? What is needed (and what already exists in the world of futurism) is something called a projective model. Projective models are predictions about the future, but not solely based on past events. Unlike predictive models, projective models are largely based on mental models. They might also be called "blue sky" models [4]. However, the two kinds of model share attributes that might help scientists and futurists build better models of future events.

The success of projective models depends on a tension between uniformity and empirically-driven outliers. Every January 1, lists of predictions for the upcoming year are unveiled. There are elements of uniformity in that historical trends are continued. However, improper predictions of economic collapse and failures to predict deaths and accidents demonstrate the difficulties of properly incorporating outliers. While outlier incorporation is a feature of predictive models, it is a particularly important component of projective models, and critical to getting projective statements correct.

I lived in this year. I'm not sure this came to fruition.

But, this could be the way the world works in the 25th century.

In order to make a coherent statement about the future, projective models (as well as predictive models) must assume that trends represent some kind of norm. In projective models, normative parameters are not based on averaging, but on popular heuristics for making a projection. The three most common of these are: 1) assume things stay the same, 2) catastrophe (or comeuppance) is coming soon, and 3) things will decrease/increase linearly. Much like curve-fitting exercises that are part of predictive modeling, projective model heuristics make assumptions about the behavior of the system in question. Likewise, the existence of inherently unpredictable events (such as black swans [5]) affects projective and predictive models alike.

Three different scenarios for projective models: TOP: things stay the same, MIDDLE: constant increase or decrease over time, BOTTOM: big events change everything.
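The three scenarios above are simple enough to write down as toy extrapolation rules. The sketch below is purely illustrative (the historical series and the size of the shock are made up); it just shows how the three heuristics diverge from the same past data.

import numpy as np

history = np.array([3.1, 3.3, 3.2, 3.6, 3.8, 3.7, 4.0])    # made-up historical series
horizon = 5

def stay_the_same(series, horizon):
    # heuristic 1: the future looks like the most recent observation
    return np.full(horizon, series[-1])

def catastrophe(series, horizon, shock=-1.5):
    # heuristic 2: a large one-off shock, flat thereafter
    return np.full(horizon, series[-1] + shock)

def linear_trend(series, horizon):
    # heuristic 3: fit a straight line to the past and extend it
    slope, intercept = np.polyfit(np.arange(len(series)), series, 1)
    future_t = np.arange(len(series), len(series) + horizon)
    return slope * future_t + intercept

for name, projection in [("stay the same", stay_the_same(history, horizon)),
                         ("catastrophe", catastrophe(history, horizon)),
                         ("linear trend", linear_trend(history, horizon))]:
    print("%-14s %s" % (name, np.round(projection, 2)))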

The first phenomenon that drives reality away from predictions involves data-driven trends from the past that do not necessarily carry forward into the future. We cannot truly know what the future holds, and so must use extrapolation to make such statements. In the case of economic or other model-based projections (generated using traditional predictive model methodologies), improper assumptions can lead to deceptive results [6]. Sometimes, these extrapolations are interpreted as innumeracy [7]. But since they are imprecise statements, they can sometimes turn out to be correct. What else, then, contributes to incorrect projections?

The other phenomenon that drives reality away from predictions is phase change behavior. Currently, there is a debate as to whether or not the era of economic growth is over in the developed nations. This involves more than just getting the statistical trend correct. It involves correctly identifying the transition from one historical phase (growth interspersed with periods of decline) to another (dominated by periods of flat growth).

Futurism demonstrates the relatively high failure rate of projective models. A recent Paul Krugman blog article [8.1] mentions Herman Kahn's list of predictions for the Year 2000, circa 1967 [9]. Many of these predictions are incorrect, but approximately 26% were fulfilled in one way or another. The reasons for this somewhat low success rate and the nature of fulfilling a prediction have led me to come up with three criteria for judging whether a prediction is likely to be wrong, fulfilled, or partially fulfilled on a short or moderate historical timescale [10].

What makes a technological prediction fundamentally incorrect?

I will now introduce three reasons why I think future technological projections are often incorrect. These involve historical and physical factors that are often at odds with human intuition and the conception of a mental model:

1) No technological or historical precedent at the time of prediction. If there is no precedent for world peace, why would you predict it to be so fifty years in the future? The amount of development needed not only to make something a reality but to produce it in a replicable fashion (or at the appropriate economy of scale) will roughly determine the amount of time needed to realize the prediction. A related issue involves historical contingency (sometimes also referred to as "lock-in"). It is often easy to predict gains in an existing technological framework [11]. It is much harder to imagine an entirely new paradigm. The movie "Back to the Future, Part II", partially set in 2015, features an example of this: it envisioned not only flying cars, but a fax machine in every room of the house.

The subject of [8.1, 8.2] is the self-driving car, which was envisioned in the 1990 version of "Total Recall". Now, of course, the self-driving car is becoming a reality. But consider that many of the component technologies that enable the self-driving car (e.g. linear filtering, computer vision, GPS) have been around for a few decades. And related varieties of autonomous robot are in the process of changing the economic and social landscape [8.3]. It is incremental developments along that trajectory that have enabled the self-driving car rather than de novo innovation. This is not to say that de novo innovations do not occur or have an effect on the future. Indeed they do, as the rise of innovations surrounding the internet (e.g. social media, online shopping) demonstrates. However, even here, such innovations are dependent on a trajectory of technological advancement and cultural imagination [12].

2) Prediction requires a relatively high energetic threshold (e.g. flying cars). Why don't we have flying cars yet? Or better yet, why are big predators so rare [13]? The answer, of course, involves energetic requirements. The higher the energetic requirements of a technology (e.g. warp drive), the less likely it is to be conceived or developed without an accompanying source of energy. This has, of course, been the limiting factor for the development of long-range electric cars. How can ecological and predictive models help us understand why projective models so often fail?

In [13], it is argued that ecological constraints (namely, the big predators' energetic footprint) prevent large animals from becoming too numerous. And so it is with the development of technologies that require a high energy density [14]. As much as I think Moore's Law analogies are severely overused, one is actually appropriate here. According to Moore's Law, innovations have repeatedly halved the size of transistors at regular intervals. However, this trend is now being threatened by fundamental size limitations. Similarly, there exist fundamental energetic limitations to many technologies, which make advances with high energy requirements improbable.

How big is too big? Can things be scaled up infinitely? Or are there clear energetic limits to realizing certain technologies?

3) The prediction concerns a highly complex system (e.g. diagnosis of disease, cyborgs). In general, technologists are either dismissive of complexity or treat it as a quasi-religious mystery. We know from recent unpleasantness in financial markets that complexity can wreak havoc on predictive models. The effects of complexity on projective models are even more problematic. Sometimes, the system that is supposed to be conquered or created to fulfill the prediction is much more complex than previously assumed.

One example of this comes from the promises made after the initial draft of the human genome was introduced in 2001 [15]. The sequencing work was done faster than expected, and it was assumed that sequence data could provide the necessary information for curing most diseases in a short period of time. The tenth anniversary was marked with NYT articles [16] contemplating why a lot of the early predictions either never came to fruition or have been slow in coming true. What was not taken into account in these early predictions was the sheer complexity of human physiology and its role in disease.

I woke up and discovered complexity! Examples from social (top) and physical (bottom) systems.

4) The Nostradamus Effect: when projective models go bad. The Nostradamus effect can be defined as a prediction consisting of a vague, verbal statement that can easily be fit to the empirical world. The Nostradamus effect is essentially the opposite of the traditional scientific method. Generally, scientific inquiry proceeds from empirical observations, which result in a theory. The Nostradamus effect uses vague hypotheses that are then matched to future events.

I remember when I was about 10 years old, I was impressed by Nostradamus' predictions, which I was introduced to by a TV presentation narrated by Orson Welles [17]. This inspired a trip to the local library, where I checked out an abridged version of Nostradamus. What I found was underwhelming: his predictions consist of vague axioms which can be applied to any number of scenarios. In the world of predictive models, such a model would be both noisy and prone to false positives.

"Measuring" the future -- a losing battle? Will being more precise in our projections help or hurt their accuracy?


One way to improve the accuracy of projective models is to work towards a semantically-oriented model based on a hybrid Markovian-Bayesian architecture. This would allow us to approximate the current state of technological innovation as a series of states, while also taking into account conditional information that accounts for unexpected events. Of course, we would also have to include a number of variables accounting for technological shifts (sudden and gradual), and their underlying causes. Even this would probably not be sufficient for dealing with unpredictable events, and so would also require a stochastic or chaotic component to generate scenarios consistent with the current limitations of future projections.
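As a very rough sketch of what the Markovian half of such a model could look like (my own toy construction, not a worked-out architecture; the Bayesian conditioning on observed evidence is omitted): technological change is treated as a Markov chain over a handful of regimes, with a small per-step probability of an exogenous shock that ignores the usual transition structure. All states and numbers are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(6)

states = ["incremental", "stagnation", "breakthrough"]
# baseline transition probabilities between regimes (rows sum to 1; assumed values)
P = np.array([[0.80, 0.15, 0.05],
              [0.30, 0.65, 0.05],
              [0.50, 0.10, 0.40]])
shock_prob = 0.02        # chance per step of an unpredictable, history-breaking event

def simulate(steps=50, start=0):
    s, path = start, []
    for _ in range(steps):
        if rng.random() < shock_prob:
            s = int(rng.integers(len(states)))         # a shock jumps to a random regime
        else:
            s = int(rng.choice(len(states), p=P[s]))   # otherwise follow the Markov dynamics
        path.append(states[s])
    return path

# many runs give a distribution over scenarios rather than a single point projection
runs = [simulate() for _ in range(1000)]
final_states = [run[-1] for run in runs]
for s in states:
    print("P(end in %-12s) ~ %.2f" % (s, final_states.count(s) / len(runs)))

The output is a distribution over end states rather than a single forecast, which is closer in spirit to scenario planning than to a point prediction.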

Even the most elaborate and efficient of models would most likely make us only marginally better than Nostradamus at projecting future trends. Therefore, one way to merge predictive and projective models to the benefit of both would be to acquire long historical (or time-series) datasets in the same way we currently acquire (and hype) "big" data [18]. Such an approach would give us both a quantitative and qualitative appreciation of technological evolution that is sometimes missing from many futurist-type predictions.


NOTES:
[1] “The AI Behind Watson”. AAAI technical paper.

[2] Economist article ("This is Your Ground Pilot Speaking") on autonomous aircraft.

[3] Dayan, P. and Abbott, L.F.  Theoretical Neuroscience. MIT Press (1998). AND Hawkins, J. and Blakeslee, S. On Intelligence. Times Books, New York (2003).

[4] For examples of future projections and how they play out in history, see the following features:

1) IEEE Spectrum. Feature: Life in 2030. Podcasts and Videos.

2) Armstrong, S.  Assessing Kurzweil: the results. Less Wrong blog. January 16. (2013)

3) What 2012 Stuff Will Seem Crazy in 2060? The David Pakman Show, YouTube. January 4 (2013).

[5] Taleb, N.N.  The Black Swan: the impact of the highly improbable. Random House, New York (2010).

Visions of the future that involve utopian or dystopian settings are often based on over-interpreting the effects of these large-scale shifts. Especially in terms of the proposed "comeuppance" scenario, a common science fiction trope involves a dystopian outcome from the extreme development of technology (e.g. "1984", "Animal Farm").

[6] See these Wikipedia pages on Statistical Assumptions and Economic Forecasting. And then read these blog posts on economic projections and weather forecasts:

1) Krugman, P.  The Mostly Solved Deficit Problem. Conscience of a Liberal blog. January 10 (2013).

2) Krugman, P.  Future Inequality, according to the CBO. Conscience of a Liberal blog. December 27 (2012).

3) Robson, D.  How good are the Weather Channel's predictions? Short Sharp Science blog. February 6 (2009).

[7] Seife, C.  Proofiness: the dark arts of mathematical deception. Viking Press, New York (2010).

[8] A series of blog posts on the potential of autonomous intelligence and its effects on the economy and our society in general. This topic is becoming a popular one -- the following are merely a starting point:

1) Krugman, P.  Look Ma, No (Human) Hands. The Conscience of a Liberal blog, January 25 (2013).

2) Thrun, S.  What we're driving at. Official Google blog. October 9 (2010)

3) Kaminska, I.  The Tech Debate Blasts Off. Towards a Leisure Society blog. December 28 (2012).

[9] re-visited and rated courtesy of Leonard Richardson's Crummy blog. Also see his piece on "The Future: a retrospective" (reflections on the book "Future Stuff").

[10] My definition of short to moderate historical timescales are on the order of 20 to 200 years. This time window can vary based on both context and the current point in time relative to critical sociohistorical events, cultural change, technological revolutions, etc.

[11] Or perhaps not. A good historical yardstick for this is the book "Future Shock" by Alvin Toffler (first published in 1970). Upon reading it in 2013, is it utopian, dystopian, or accurate (or elements of all three)?
Also, here is a link to a documentary on Future Shock narrated by Orson Welles (from 1972).


[12] For more information, please see the following lecture by David Graeber entitled "On Bureaucratic Technologies and the Future as Dream-Time", which discusses the role of existing bureaucracies (social structures) and cultural imagination in technological innovation.

[13] This paraphrases a title of a book by Paul Colinvaux: "Why are Big Fierce Animals Rare?", Princeton University Press, 1979.

[14] A related problem is the improvement of existing technologies, such as more sustainable energy sources for jet aircraft and rockets. Please see this ASME Knowledgebase entry for more information.

[15] International Human Genome Sequencing Consortium  Initial sequencing and analysis of the human genome. Nature, 409, 860-921 (2001) AND Venter, C. et.al  The sequence of the human genome. Science, 291(5507), 1304-1351 (2001).

[16] A series of New York Times articles associated with the 10th anniversary of the draft full human genome sequence:

1) Pollack, A.  Awaiting the Payoff. NYT, June 14  (2010).

2) Wade, N.  A Decade Later, Genetic Map Yields Few New Cures. NYT, June 12  (2010).

3) Editorial: The Genome, 10 Years Later. NYT, June 20 (2010).

[17] "Nostradamous: the man who saw tomorrow". Narrated by Orson Wells (circa 1981). Watch on Vimeo.

[18] Arbesman, S.  Stop Hyping Big Data and Start Paying Attention to 'Long Data'. Social Dimension blog, January 29 (2013).
