August 27, 2013

Evolutionary Models from the Reading Queue

As my reading queue is always bigger than my attentional throughput, I have decided that when I hit upon a theme, I will blog about the papers involved. In this post, recent papers from PLoS Computational Biology and PLoS One illustrate three instances of evolutionary modeling, of which I will provide a guided tour for a general audience.

What is an evolutionary model? Since evolution is hard to observe in most cases, we require models to fully appreciate what it means to evolve. While we could merely observe extant organisms and ruminate on the adaptive significance of specific traits, a more complete picture can be gathered from fossils [1] and comparative anatomy. Yet these are static evolutionary models -- static in the sense that all change (e.g. dynamics) is inferred from the observed data.

Can we improve upon this? The evolutionary analysis of genomes and phylogenetic simulations are quasi-static models -- models that work with more information but still rely significantly on dynamic inference. Not that there's anything wrong with that. But we could also simulate the organisms and conditions under which evolution operates. This can provide us with general principles that can corroborate (or in some cases reinterpret) our inferential endeavors.


What is an evolutionary model? Here are some static evolutionary models (a.k.a. fossils). Pictures from the Florida Museum of Natural History (FLMNH) Hall of Fossils.

Using mathematical and computational techniques, models can be used to support hypotheses and ideas related to evolutionary acceleration, complexity, and tradeoffs. While the models may look nothing like the traditional forms of evidence, they may nevertheless provide insight into the fundamental mechanisms of evolution.

1) Evolutionary "acceleration": In "Epigenetic Feedback Regulation Accelerates Adaptation and Evolution" [2], the authors use simplified gene regulatory network models with epigenetic inputs to model a phenomenon called epigenetic feedback regulation (EFR). This is done by modeling three scenarios using differential equations: noise-driven adaptation with EFR, EFR without growth dependency, and EFR under evolution.

Gene expression dynamics as represented in [2]. A dynamic model, as opposed to a fossil.

In the first case (noise-driven adaptation with EFR), a property called growth is tied explicitly to gene expression patterns (a direct result of EFR). The population of networks initially exhibits itinerant growth dynamics (e.g. fast, then slow, then fast again). Over time, the population settles into a high growth regime.

When growth and expression patterns are decoupled (made independent of one another), active growth can be achieved without gene expression being distributed across too many attractor basins. Put another way, gene expression will tend to be more coherent and can be synchronized across the network.

Finally, in the case of EFR under evolution, populations of networks are evolved and compared with networks that are generated randomly. Evolution allows a greater proportion of networks to exhibit a steady growth rate (Figure 1), which suggests that the effects of EFR seen in the non-evolutionary cases also play a role in evolution with natural selection.
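To give a flavor of what these differential-equation models look like, here is a minimal sketch of gene-expression dynamics with a slow feedback from expression onto regulatory strength. It is not the actual model from [2]; the network size, equations, and rates here are illustrative assumptions only.

```python
# A minimal sketch (NOT the exact model of [2]): a generic sigmoidal
# gene regulatory network in which expression levels x slowly reshape
# their own regulatory weights J (the "epigenetic feedback"),
# integrated with simple Euler steps.
import numpy as np

rng = np.random.default_rng(0)
N = 10                              # number of genes (illustrative)
J = rng.normal(0.0, 1.0, (N, N))    # regulatory interaction matrix
x = rng.uniform(-1.0, 1.0, N)       # gene expression levels
alpha, decay, dt = 0.02, 0.01, 0.1  # feedback rate, weight decay, time step

for t in range(5000):
    dx = np.tanh(J @ x) - x                  # fast expression dynamics
    dJ = alpha * np.outer(x, x) - decay * J  # slow epigenetic feedback on J
    x = x + dt * dx
    J = J + dt * dJ

print("final expression pattern:", np.round(x, 2))
```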


Figure 1. Figure 6 from [2]. Green function represents the effects of evolution over randomly-assembled networks.


2) Evolutionary increases in complexity: In "The Minimal Complexity of Adapting Agents Increases with Fitness" [3], the authors use an animat (artificial life agent) brain to test the hypothesis that the complexity of a population increases over evolutionary time. This study is based on the premise that adaptation is a multivariate process which occurs at multiple time-scales, so during the course of evolution we should expect some traits in a lineage to evolve faster than others.

We should also expect a generalized increase in complexity, as this differential adaptation results in more moving parts (so to speak). In this case, the investigation is restricted to an animat population's neural complement (Figure 2, top), which is represented using twelve binary variables (similar in composition to phylogenetic character states). The animat population is evolved over 60,000 generations. The resulting complexity is evaluated statistically using mutual information and an integrated information measure (Figure 2, bottom) related to Giulio Tononi's Phi parameter.
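As a toy version of the statistical side of this analysis, the sketch below estimates the mutual information between two binary variables from sampled states. The integrated information (Phi) measure used in [3] is considerably more involved; this only illustrates the pairwise building block, and the data here are simulated.

```python
# A sketch of estimating mutual information between two binary units
# (e.g. an animat's sensor and motor variables) from sampled states.
import numpy as np

def mutual_information(a, b):
    """I(A;B) in bits for two binary sample arrays."""
    mi = 0.0
    for va in (0, 1):
        for vb in (0, 1):
            p_ab = np.mean((a == va) & (b == vb))
            p_a, p_b = np.mean(a == va), np.mean(b == vb)
            if p_ab > 0:
                mi += p_ab * np.log2(p_ab / (p_a * p_b))
    return mi

rng = np.random.default_rng(1)
a = rng.integers(0, 2, 10000)
b = np.where(rng.random(10000) < 0.9, a, 1 - a)  # b copies a 90% of the time
print(f"I(A;B) = {mutual_information(a, b):.3f} bits")  # ~0.53 bits
```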

Figure 2. Figures 2 (top) and 3 (bottom) from [3]. Animat architecture (top) and fitness measured against complexity (as characterized by mutual information) over evolutionary time (bottom).


3) Evolutionary tradeoffs: In "Evolutionary Tradeoffs between Economy and Effectiveness in Biological Homeostasis Systems" [4], a multi-task (e.g. Pareto) optimization approach is used to bridge the effectiveness and economy of an evolving physiological system. Effectiveness refers to functional coherence during the performance of a task, while economy refers to doing things like repair or investment without incurring a high fitness cost.

For the uninitiated, Pareto optimality describes an allocation of resources between the two tasks in which neither task's performance can be improved without degrading the other's. Assuming Pareto optimality is attainable, this leads to a set of best compromises between the two tasks (Figure 3). This has relevance to the function of homeostatic (e.g. regulatory) mechanisms, and potentially the evolvability and adaptability of these mechanisms.
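For concreteness, here is a minimal sketch of how a Pareto front can be extracted from a population of candidate systems scored on two objectives. It is not the model from [4]; the scores are random stand-ins for effectiveness and economy.

```python
# A sketch of the generic "set of best compromises": a candidate is on
# the Pareto front if no other candidate beats it on both objectives
# (both taken as quantities to maximize).
import numpy as np

rng = np.random.default_rng(2)
scores = rng.random((200, 2))  # columns: effectiveness, economy (made up)

def pareto_front(points):
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q >= p) and any(q > p)
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(p)
    return np.array(sorted(front, key=lambda p: p[0]))

front = pareto_front(scores)
print(f"{len(front)} of {len(scores)} candidates are Pareto-optimal")
```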


Figure 3. Figure 4 from [4]. The relationship between effectiveness (a) and economy (b), expressed as points along a Pareto front (black function, c).


For the skeptics who don't see the relevance of these papers to evolutionary science, I should point out that these models are not intended to mimic real biology or actual organisms. In this sense, the models above might be viewed as useless curiosities. However, modeling is not about fully replicating biology. Rather, good models should approximate key parameters (e.g. those that explain the most variance) in a process.

Perhaps it is ironic that the best biological models might actually be considered "false" models. The intentional use of false models [5] has a significant history in the modeling of biological complexity. These false models (according to [5] there are seven types) include those with the following attributes:

1) Models that have very local applicability. While they lack generalizability, they do describe local phenomena well. These types of models might be employed to understand unique phenomena. This type of model can also be contrasted with models that overly idealize nature (next point).

2) Models that are an idealization of nature. Neural Network models fall into this category. The only properties of the brain that matter are neurons, their connections, and a mechanism for excitability. Ignoring all other complexity in the brain still gives us a somewhat-useful model of cognition.

3) Models that are incomplete but causally relevant. If a tree falls in the woods because it has been struck by lightning and its innards consumed by termites, and if you only observe the tree falling down in a slight breeze, you would conclude that the slight breeze caused the tree to collapse. The breeze is one causal factor (and in some cases an important one), but not the entire story [6].

4) Models that intentionally mis-describe interactions between variables (e.g. spurious context independence, reductionist bias). In the service of seeking causality, important (and often critical) interactions between variables are overlooked. In reductionist science, the focus on one or two variables (e.g. finding a gene responsible for x) in the face of great complexity is another version of this point. While these few variables may sometimes describe much of the variance, oftentimes they do not (see the sketch after this list).

5) Models that are fundamentally wrong-headed descriptions of nature. This becomes an issue when models adopted for the first four points are greatly successful, and their adoption/use becomes self-perpetuating. Intuitive or naive models (e.g. models that sound consistent with intuition but are not supported by evidence) also fall into this category.

6) Models that are purely phenomenological in nature (e.g. genome annotation). While this type of model is useful for understanding the structure of a problem, it is hard to elucidate function using the same model. This is of course true when a model lacks predictive power. However, since our knowledge of most complex systems is incomplete, purely phenomenological models are in essence false (but often useful).

7) Models that fail to describe or predict the data correctly. The use of curve-fitting techniques and characteristic functions falls into this category. While characteristic functions are useful approximations of mean behavior, they do not describe the natural variation well. While much effort is put into outlier detection, less effort is put into understanding the relative significance of outlier data points.
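As promised above (point 4), here is a toy demonstration of how ignoring interactions between variables mis-describes a system: data are generated with a strong interaction term, then fit with a purely additive model. All numbers are made up for illustration.

```python
# Point 4 in miniature: an additive (interaction-free) regression fit
# to data whose variance is mostly carried by an interaction term.
import numpy as np

rng = np.random.default_rng(4)
x1, x2 = rng.normal(size=(2, 1000))
y = x1 + x2 + 3.0 * x1 * x2 + rng.normal(0, 0.1, 1000)

# Additive model: y ~ b0 + b1*x1 + b2*x2 (no interaction term)
X = np.column_stack([np.ones(1000), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
r2 = 1 - resid.var() / y.var()
print(f"additive model R^2 = {r2:.2f}")  # well below 1: interaction ignored
```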

In conclusion, these examples also reveal two things about evolutionary models:

* Any single model cannot be an all-purpose tool. As shown in the examples above, a single model might be very good at modeling a specific phenomenon (e.g. the relationship between gene regulation and adaptation), but not at all relevant to other aspects of evolution (e.g. evolutionary divergence).

* Dynamic models, like fossils (e.g. static models), are incomplete. This does not imply a fault in one line of evidence or another. Rather, it is suggestive of their cooperative role in our understanding of evolutionary processes.

NOTES:

[1] For more information on how fossils can be used as evolutionary models, please see the GB3D Fossil Database.

[2] Furusawa, C. and Kaneko, K.   Epigenetic Feedback Regulation Accelerates Adaptation and Evolution. PLoS One, 8(5), e61251 (2013).

[3] Joshi, N.J., Tononi, G., and Koch, C.   The Minimal Complexity of Adapting Agents Increases with Fitness. PLoS Computational Biology, 9(7), e1003111 (2013).

[4] Szekely, P., Sheftel, H., Mayo, A., and Alon, U.   Evolutionary Tradeoffs between Economy and Effectiveness in Biological Homeostasis Systems. PLoS Computational Biology, 9(8), e1003163 (2013).

[5] Wimsatt, W.   False Models as a Means to Truer Theories. Chapter 2 in Nitecki, M.H. and Hoffman, A. Neutral Models in Biology. Oxford, New York (1987).

[6] Nielsen, M.   If correlation doesn't imply causation, then what does? Data-driven Intelligence (DDI) blog, January 23 (2012).

August 20, 2013

Fear and Loathing in Robotistan

Do you fear your (future) robot overlords? In a recent Mashable op-ed [1], John Havens argued that we should fear the future of artificial intelligence, if only for its propensity to get things wrong and our propensity to put too much trust in the machine's output. Two other emerging themes in popular culture, anticipation of the coming singularity [2] and fear of the deleterious impact robots will have on job growth [3], are what I will call robo-utopianism and robo-angst, respectively.

Ken Jennings. One man who welcomes our new robotic overlords.

Is robo-angst a general fear of the unknown? Or is it a justified response to an emerging threat? I would argue that it is mostly the former. In a previous Synthetic Daisies post critiquing futurism, I postulated that predicting the future involves both unbridled optimism and potential catastrophe. While some of this uncertainty can be overcome by considering the historical contingencies involved, the mere existence of unknowns (particularly if they involve intangibles) drives angsty and utopian impulses alike.

Both of these impulses are also based on the nature of modern robotic technology. Perhaps due to our desire to cheaply replicate a docile labor force, robots represent intelligent behavior that is ultra-logical, but not particularly human [4]. Perhaps the other aspects of human intelligence are hard to reproduce, or perhaps there is indeed something else at work here. Nevertheless, this constraint can be seen in the nature of tests for sentience such as the Captcha (Turing test-like pattern recognition in context), used to distinguish humans from spambots.

Examples of Captcha technology. COURTESY: captcha.net

So how do we go about achieving sentience? As robo-utopians would have it, this is the next logical step in artificial intelligence research, requiring only natural increases in the current technology platform given time. Does becoming sentient involve massive increases in the ultra-logical paradigm, massive increases in embedded context, or the development of an artificial theory of mind? And if making robots more human requires something else, do we even need to mimic human intelligence?

Perhaps part of the answer is that robots (physical and virtual) need to understand humans well enough to understand their questions. A recent piece by Gary Marcus in the New Yorker [5] posits that modern search and "knowledge" engines (e.g. Wolfram|Alpha) can do no better than chance (e.g. robo-stupidity) for truly deep, multilayered questions that involve contextual knowledge. 

When robots do things well, it usually involves the aspects of human cognition and performance that we understand fairly well, such as logical analysis and pattern recognition. Much of the current techniques in machine learning and data mining are derived from topics that have been studied for decades. But what about the activities humans engage in that are not logical? 

Example of the biological absurdity test.

One example of adding to the ultra-logical framework comes from social robotics and the simulation of emotional intelligence [6]. But animals exhibit individual cognition, social cognition, and something else which cannot be replicated simply by adding parallel processing, emotional reflexivity, or "good enough" heuristics. What's more, the "logical/systematic" and "irrational/creative" aspects of human behavior are not independent. For better or worse, the right-brained, left-brained dichotomy is a myth. For robots to be feared (or not to be feared), they must be like us (e.g. assimilated).

Examples of machine absurdity. TOP: an absurd conclusion from a collection of facts, BOTTOM: deep irony and unexpected results, courtesy of a recommender system.

Perhaps what is missing are shared cultural patterns among a group of robots, or "cultural" behaviors that are nonsense from a purely logical and/or traditional evolutionary perspective. Examples include: the use of rhetoric and folklore to convey information, the subjective classification of the environment, and conceptual and axiomatic blends [7].

How do you incorporate new information into an old framework? For humans, it may or may not be easy. If it falls within the prevailing conceptual framework, it is something humans AND robots can do fairly well. However, when the idea (or exemplar in the case of artificial intelligence) falls outside the prevailing conceptual framework, we face what I call the oddball cultural behavior problem.

Take ideas that lie outside the sphere of the prevailing conceptual model (e.g. spherical earth vs. flat earth, infection vs. pre-germ theory medicine) as an example. These ideas could be viewed as revolutionary findings, ideas at odds with the status quo, or as crackpot musings [8]. The chosen point-of-view is informed either by naive theory (e.g. conceptual and axiomatic blends) or pure logical deduction. Regardless of which is used, when empirical observations in a given area are sparse, arguments become less tied to formal models, and wild stories may predominate. This may explain why artificial intelligence sometimes makes nonsensical predictions, or why humans sometimes embrace seemingly nonsensical ideas.

Incorporating new information into an old framework, a.k.a. the oddball cultural behavior problem. When the idea falls well outside of the existing framework, how is it acted upon?

In some cases, oddball cultural behavior is classified using conceptual blends (or short-cuts) [9] that integrate the new information. This is similar to, but distinct from, how heuristics are used in decision-making. In this case, cultural change (or change in larger contexts/structures) is regulated (implemented in a combinatorial manner) by these short-cuts. One might use a short-cut (more flexible than changing a finite number of rules) to respond to the immediate needs of the environment, but because it is not an exact response, the cultural system overshoots the optimal response, thus requiring additional short-cuts.

Moving on from what robots don't do well, some of the robo-angst is directed towards the integration of people and machines (or computation). The discussion in Havens' op-ed about Steve Mann might be understood as radically-transparent ubiquitous computing [10]. Steve Mann's experience is intriguing for the same reasons that human culture is a selectively-transparent ubiquitous framework for human cognition and survival. The real breakthroughs in autonomous intelligence in the future might only be made by incorporating radically-transparent ubiquitous computing into the design of such agents.

When tasks require intersubjective context, it is worth asking the question: which is funnier to the professional clown? A robotic comedian? Perhaps, but he's not quite skilled in the art. COURTESY: New Scientist and Dilbert comic strip.

Why would we want a robot that makes rhetorical slogans [11]? Or a robot that uses ritual to relate with other robots? Or a denialist [12] bot? Before the concurrent rise of big data, social media, and machine learning, the answer might be: we don't. After all, a major advantage of robots is to create autonomous agents that do not exhibit human foibles. Why would we want to screw that up?

However, it is worth considering that these same expert systems have uncovered a lot of aggregate human behavior that both violate our intuition [13] and are not something to be proud of. These behaviors (such as purchasing patterns or dishonesty) may not be optimal, yet they are the product of intelligent behavior all the same [14]. If we want to understand what it means to be human, then we must build robots that engage in this side of the equation. Then perhaps we may see the confluence of robo-angst and robo-utopia on the other side of the uncanny valley.

NOTES: 

[1] Havens, J.   You should be afraid of Artificial Intelligence. Mashable news aggregator, August 3 (2013).

[2] Barrat, J.   Our Final Invention: Artificial Intelligence and the End of the Human Era. Thomas Dunne Books (2013).

[3] Drum, K.   Welcome, robot overlords. Please don't fire us? Mother Jones Magazine, May/June (2013) AND Coppola, F.   The Wastefulness of Automation. Pieria magazine, July 13 (2013).

For a fun take on this, see: Morgan R.   The (Robot) Creative Class. New York Magazine, June 9 (2013).

[4] Galef, J.   The Straw Vulcan: Hollywood's illogical approach to logical decision-making. Measure of Doubt Blog, November 26 (2011).

[5] Marcus, G.   Why can't my computer understand me? New Yorker Magazine, August 16 (2013).

For a take on recommender systems and other intelligent agents gone bad (e.g. the annoying valley hypothesis), please see: Moyer, B.   The Annoying Valley. EE Journal, November 17 (2011).

[6] Dautenhahn, K., Bond, A.H., Canamero, L., Edmonds, B.   Socially Intelligent Agents. Kluwer (2002).

[7] Fauconnier, G. and Turner, M.   The Way We Think: Conceptual Blending And The Mind's Hidden Complexities. Basic Books (2002) AND Sweetser, E.   Blended spaces and performativity. Cognitive Linguistics, 11(3-4), 305-334 (2000).

[8] For an example of oddball and potentially crackpot ideas in science, please see: Wertheim, M.   Physics on the Fringe: Smoke Rings, Circlons, and Alternative Theories of Everything. Walker & Company (2011) AND Horgan, J.   In Physics, telling cranks from experts ain't easy. Cross-Check blog, December 11 (2011).


[9] Edgerton, R.B.   Rules, Exceptions, and Social Order University of California Press, Berkeley (1985).

[10] For an interesting take on Steve Mann's approach to Augmented Reality and its social implications, please see: Alicea, B.   Steve Mann, misunderstood. Synthetic Daisies blog, July 18 (2012).

[11] Denton, R.E.   The rhetorical functions of slogans: Classifications and characteristics. Communication Quarterly, 28(2), 10-18 (1980).


[13] For an accessible review, please see the following feature and book: 

Lohr, S.   Sizing up Big Data, Broadening Beyond the Internet. Big Data 2013 feature, New York Times Bits blog, June 19 (2013).

Mayer-Schonberger, V. and Cukier, K.   Big Data: A Revolution That Will Transform How We Live, Work, and Think. Houghton-Mifflin (2013).

[14] Similar types of behaviors (e.g. the Machiavellian Intelligence hypothesis) can be seen in non-human animal species. For classic examples from monkeys, please see: Byrne, R.W. and Whiten, A.   Machiavellian Intelligence: Social Expertise and the Evolution of Intellect in Monkeys, Apes, and Humans. Oxford University Press (1989). 

August 16, 2013

Fun and (research) Profit with Internet Memes

Here are several more posts from my micro-blog and conceptual proving ground, Tumbld Thoughts. This series is all about internet memes. The first feature (The white flag of reductionism........) is my reflections on what drives complex processes like history and evolution, using a popular internet comic as inspiration. The second feature (What comes next, a pattern recognition test......) is a matching game I made from a meme involving the failed restoration of a religious fresco and various cultural references. The final feature of this post (The role of variation in memetic evolution) highlights a new study on the evolution of and variation inherent in internet memes. In the name of scientific and creative realism, all profanity is left in its original context.

I. The white flag of reductionism........

Figure 1. The "most important driver is random shit" meme.

Inspired by the "most important driver is random shit" meme (Figure 1) [1], I have modified the cartoon to more accurately reflect the rhetorical message of the original [2]. While randomness can play a role in processes such as history, evolution, and other complex systems [3], there are at least two alternate hypotheses:

A) The image in Figure 2 features the first of these (subtle factors), which suggests that many hidden and unknown variables are responsible.

Figure 2. Cartoon for the subtle factors hypothesis.

B) The image in Figure 3 features the second (multivariate causality), which partitions the "random shit" category into many candidate causal factors, each contributing different amounts to the entire picture [4].

Figure 3. Cartoon for the multivariate causality hypothesis.

II. What comes next, a pattern recognition test......

Figure 4. Pick one image from the bottom row that completes the sequence in the top row.

Your task is to pick the next logical image in a sequence of images (see Figure 4). Choose among the options (A, B, and C) below. Creative extrapolation is required.

This exercise is brought to us by Cecilia Gimenez, the Sock Monkey, Muno from the TV show Yo Gabba Gabba, and Ted Kaczynski. Thanks for playing!

III. The role of variation in memetic evolution


Here is an interesting new arXiv paper [5] on internet memes. In the study, Michele Coscia [6] statistically analyzes memes and builds a predictive model (Figure 5) that characterizes the interactions between and variability among memes (Figure 6). This is treated independently of results from a more typical approach, which is to observe the propagation of memes in a social network.

Figure 5. A figure from the paper showing a hierarchical statistical analysis of a meme population.

The research relies upon treating memes as analogous to genes (biological units of inheritance) [7], and raises some interesting points. For example, memes appear to behave collectively and over time in ways similar to genes, including exhibiting "selfish gene"-like behavior [8] and functional collaboration; they can even become functionally integrated into "organisms" [9].

Figure 6. A demonstration of natural variation via cultural evolutionary processes in the "Annoyed Picard" meme.

Update (9/27): Here is a nice article in the latest issue of Nautil.us, critically evaluating the meme concept [10].



NOTES:

[1] Traced back to: Weiner, Z.   What drives history. Saturday Morning Breakfast Cereal comic, April 15 (2013). Can also be found on a number of other blogs and with slight variations in the wording.

[2] this is basically an argument against essentialism, and seems to be relatable across a number of reposts.

[3] Even the notion of "random shit" can be a bit more experimentally-tractable. In the case of gene expression, randomness can be partitioned into intrinsic and extrinsic variation. For more information, please see: Elowitz, M.B., Levine, A.J., Siggia, E.D., and Swain, P.S.   Stochastic gene expression in a single cell. Science, 297(5584), 1183–1186 (2002).

[4] For an example of the subtle factors hypothesis using the fall of the Roman Empire, please see: Arbesman, S.   210 Reasons for the Fall of the Roman Empire. Social Dimension blog, June 26 (2013).  


[5] Coscia, M.   Competition and Success in the Meme Pool: a Case Study on Quickmeme.com. arXiv: 1304.1712 [physics.soc-ph]

[6] Coscia, M.   Memes and Cultural Organisms. Follow the Crowd blog, June 25 (2013).

[7] this is often problematic, as the meme-as-gene metaphor may break down given complexity and the transfer to cultural system dynamics. For more on the meme-as-gene metaphor, please see: Blackmore, S. The Meme Machine. Oxford University Press, Oxford, UK (2000).

For more on the notion of dual (meme-gene) inheritance, please see: Richerson, P.J. and Boyd, R.   Not By Genes Alone: How Culture Transformed Human Evolution. University of Chicago Press, Chicago (2005).

[8] For more on the selfish gene hypothesis, please see: Dawkins, R.   The Selfish Gene. Oxford University Press, Oxford, UK (1976).

[9] To a biologist, these terms will seem misused. The terminology is analogical, but the basic structure of units of inheritance and selection is closer to what is seen in biology.

[10] Rabinowitz, A.   The Meme as Meme. Nautil.us, Issue 5 (2013). For more on the status of memetics as a science (a bit dated now, but still intellectually relevant), please see: Aunger, R.   Darwinizing culture: the status of memetics as a science. Oxford University Press (2000).

August 6, 2013

Thought (Memetic) Soup, August edition

Here is the latest set of loosely-related features from my micro-blog, Tumbld Thoughts. Another installment of the Thought (Memetic) Soup concept I debuted last month. This post features topics such as Discovering Scaling Laws in Human Innovation (I), Argument to Social Moderation (II), and Towards a Subjective Information Theory (III). A set of creative and edgy (at least from an academic perspective) posts at the intersection of human culture, technology, and complexity theory.

I. Discovering Scaling Laws in Human Innovation

Here is a feature from IEEE Spectrum [1] on Santa Fe Institute's Performance Curve Database (PCDB). The picture above is from the IEEE Spectrum article, and features four technologies (dynamic RAM, DNA sequencing, photovoltaics, and wind power) that conform to Moore's Law (e.g. get cheaper, smaller, better over time) [2].


The Moore's Law-like linear scaling in these four technologies is not without variation, and this variation [3] is related to platform changes, material availability, and other factors. The picture below is from the PCDB website, and features raw data behind the trend in pricing decreases for wind power over time [4]. 
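For those curious how such trends are quantified, here is a sketch of the standard move: an exponential cost decline is linear in log space, so one regresses log(cost) on time and reads off a halving time. The data below are synthetic stand-ins, not actual PCDB data.

```python
# A sketch of fitting a Moore's-Law-style trend: exponential cost
# decline appears linear in log space, so regress log(cost) on time.
# The "observed" costs here are simulated for illustration.
import numpy as np

years = np.arange(1990, 2011)
true_rate = -0.08                               # ~8% cost drop per year
cost = 100 * np.exp(true_rate * (years - 1990))
noise = np.random.default_rng(3).normal(0, 0.1, len(years))
cost *= np.exp(noise)                           # multiplicative noise

slope, intercept = np.polyfit(years - 1990, np.log(cost), 1)
halving = np.log(2) / abs(slope)                # years for cost to halve
print(f"fitted decline rate: {slope:.3f}/yr; cost halves every {halving:.1f} yr")
```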



II. Argument to (Social) Moderation


Here are a few images, thoughts, and readings related to the concepts of the political "center" and the organizational power of "majority rules", discussed in a recent Washington Post article [5]. Are there inherent cognitive tendencies that define left, right, and center, or do ideas get sorted by cultural and historical context (e.g. maintenance of binary oppositions)?


Perhaps a better question is whether or not the "center" is the best possible political philosophy, given that it is often the product of consensus and compromise. This is of course the idea behind radical centrism, but also consistent with arguing to moderation (a well-known logical fallacy). 

However, moving too far in one direction or another away from the center also poses a number of problems. The psychology of extremism suggests that while leftist and rightist orientations may be rooted in psychological tendencies [6], extremism itself may be the product of general cognitive biases [7].


There is also a more interesting set of relationships between the right-left dichotomy and the dichotomy between strict majoritarianism and radical pluralism. While majoritarianism is the standard in most democracies, it often fails in culturally diverse societies [8]. The alternative would be pluralism, but pluralism may or may not be consistent with the principles of pragmatism (e.g. centrism).


III. Towards a Subjective Information Theory


This is a rose, this is not a pipe, and there is no spoon. What does this mean for me [9]? Aside from the possibility matrix shown above, Rodney Needham [10] introduces us to cultural subjectivity by describing Wittgenstein's triangle. This [11] can be defined as an object that does not have a genuinely proper description. Consider a rose as an example:

1) this is a rose (object recognition).

2) this rose has five petals (classification).

3) this rose is red (color perception).

4) this rose smells sweet (olfactory perception).

5) this rose is beautiful (subjective judgement).

These statements can be grouped (1, 2, 3-4, and 5) with respect to increasing amounts of subjectivity. Such increasing subjectivity leads to both intra- and inter-cultural diversity. This suggests that something called subjective information theory (SIT) might be possible [12].

In SIT (which is a hypothetical theory), subjectivity would increase as the configurational entropy (e.g. a greater number of possibilities) of the imagination increases. SIT would also require some form of data fusion [13], which would combine the possible answers for each statement above.
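To make the configurational-entropy intuition concrete, the toy sketch below assigns each statement about the rose a completely hypothetical number of admissible configurations and computes the uniform Shannon entropy over them; subjectivity grows with the number of possibilities. This only illustrates the intuition, not SIT itself.

```python
# A toy illustration of the SIT intuition: as the set of admissible
# descriptions ("configurations") for a statement grows, so does its
# entropy. The configuration counts are invented for illustration.
import math

statements = {
    "this is a rose (object recognition)": 2,        # rose / not-rose
    "this rose has five petals (classification)": 10,
    "this rose is red (color perception)": 30,
    "this rose smells sweet (olfactory perception)": 50,
    "this rose is beautiful (subjective judgement)": 500,
}

for text, n_configs in statements.items():
    h = math.log2(n_configs)  # uniform Shannon entropy over n possibilities
    print(f"{h:5.2f} bits  {text}")
```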

NOTES:

[1] Elert, E.   Tech Trajectories: four more Moore's Laws. IEEE Spectrum, July 26 (2013).

[2] Moore's law was originally applied to semiconductor size, power and pricing with respect to time. This linear scaling is dependent upon a certain device physics and material composition (and nothing else).

[3] these include changes in the linear function (e.g. plateaus) or the rate of change (e.g. doubling time).

[4] there is a wealth of performance curve data for many different technologies at the website.


[5] Klein, E.   There's no such thing as "the center". Washington Post, July 25 (2013). Cartoon (middle image) is a KAL cartoon (courtesy of The Economist).

[6] Mitroff, I.I.   The Psychology of Extremism. HuffPo blog, August 30 (2011).

[7] Fernbach, P.M., Rogers, T., Fox, C.R., and Sloman, S.A.   Political Extremism Is Supported by an Illusion of Understanding. Psychological Science, 24(6), 939-946 (2013).

[8] Editorial   Majoritarianism: Zombie democracy. Economist, June 22 (2013).

[9] the rose reference is featured in note [10]. "This is not a pipe" is in reference to Magritte's "Treachery of Images". "There is no Spoon" is a reference from "The Matrix" (the first movie).

[10] Needham, R.   Against the tranquility of axioms. University of California Press, Berkeley (1983).


[12] One version of subjective information is discussed in Chapter 5: Jumarie, G.   Relative Information. Springer Series in Synergetics, 47 (1990). In this reference, standard Shannon entropy is modified to account for symbols and meanings (semantics).

Another version can be found here: Tononi, G.   Phi: a voyage from the brain to the soul. Random House (2012). Here, information theory is used to account for neuronal diversity and the variety of mental states in conscious experience.

[13] For more information, please see: Bouchon-Meunier, B.  Aggregation and Fusion of Imperfect Information. Springer-Verlag, Berlin (1998).

August 1, 2013

Universal Patterns and Origins of Innovation

Here are two recent posts on innovation originally featured on my micro-blog, Tumbld Thoughts. Each post reviews a contemporary book on the patterns inherent in the innovation process. The first (I) features several different archetypes, while the second (II) features the kinds of environments that are key for maximizing innovation.

I. Universal Patterns of Innovation


Interesting book I ran across recently on the universal "patterns" that seem to underlie innovation and invention [1]. While the book is about much more than this, one core theme is the practice of discovery and what we might learn from looking at the practices of different inventors. 


One way to take advantage of these patterns is to learn the rules of innovation. These rules are defined as the underlying talent, knowledge, and allocation of resources necessary for innovation. Another lesson is to recognize that there are at least three principles (better understood as personal styles) that define great inventions. These are:

1. Serendipity, or being able to exploit chance discoveries. William Shockley's work with semiconductors (leading to the transistor) best exemplifies this principle.

2. Proof-of-principle, or the 99% perspiration, 1% inspiration approach. Thomas Edison's work on the incandescent lightbulb best exemplifies this principle.

3. Inspired Exertion, or the greater than 1% inspiration approach. Jeff Hawkins' work in developing the Palm mobile computer best exemplifies this principle.

The third lesson that leads us to innovation is to study the designs of great innovators. See these Synthetic Daisies posts from 2009 and 2011 [2] for my own (evolving) thoughts on this topic.


II. The Origins of Innovation


Here is a link to a sped-up whiteboard animation video featuring content from Steven Johnson's book "Where Good Ideas Come From: the natural history of innovation" [3]. His main thesis is that innovation tends to occur in connected spaces such as cities, reefs, and webs [4]. As mentioned at the end of the video: "chance favors the connected mind".



Innovation also occurs as a process. One of these processes is called the slow hunch. The example Johnson gives for this is Tim Berners-Lee and the invention of the World Wide Web. At first, the proto-Web was conceived as a way to organize personal data. The next stage involved extending the connectivity aspect to interpersonal tangles. Finally, a version of the Web we all recognize came to fruition as a dynamic set of interconnected documents and links. This process of successive iteration took years to achieve.


NOTES:

[1] Alesso, H.P., Smith, C., and Burke, J.   Connections: Patterns of Discovery. Wiley/IEEE Press (2008).

[2]  Alicea, B.   Innovation Class/Book. Synthetic Daisies blog, June 30 (2009) AND Alicea, B.   In praise of repetition? Synthetic Daisies blog, April 11 (2011).

[3] Also see his TED talk on the book. For more sped-up whiteboard animations on innovation and the process of invention, please see: Alicea, B.   New Directions in Making Innovation Pay. Synthetic Daisies blog, June 1 (2013).

[4] The city is a literal city (particularly the mixing that occurs on city streets), the reef is a space that metaphorically resembles a coral reef (diverse individuals visit to feed and mingle), and the web is a network (made explicit in the internet).
