
November 16, 2014

Thought (Memetic) Soup: November edition

This content is cross-posted to Tumbld Thoughts. Here are a few short observations on the state of the world and data, circa Summer 2014, that I hadn't gotten around to cross-posting until now. The meta-theme is social disruption, evolutionary change, and economic dynamics, in spite of ideonational bias. These include Disruption du jour (I), Satire Makes it Doubly Skewed (II), and Ideonational Skew - Satire = Epistemic Closure? (III).


I. Disruption du jour


Is the idea of disruptive innovation a useful concept, or is it largely a misapplied buzzword? In his original definition of "creative destruction", Joseph Schumpeter described a process of innovation that resembles an avalanche or an earthquake: most innovations do not reshape their respective industries, but a few key innovations (born out of creative ferment) do.
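To make the distinction concrete, here is a toy simulation (my own illustration, not Schumpeter's mathematics; all numbers are invented) contrasting a heavy-tailed world of innovation impacts with one where transformative change is ubiquitous:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy comparison: the "impact" of 10,000 innovations drawn from a heavy-tailed
# (Pareto) distribution, per the Schumpeterian reading above, versus a uniform
# world in which every innovation is comparably transformative.
n = 10_000
pareto_impacts = rng.pareto(a=1.5, size=n) + 1.0   # heavy tail: a few giants
uniform_impacts = rng.uniform(0.5, 1.5, size=n)    # "disruption everywhere"

for name, impacts in [("power law", pareto_impacts), ("uniform", uniform_impacts)]:
    top_share = np.sort(impacts)[-n // 100:].sum() / impacts.sum()
    print(f"{name}: top 1% of innovations carry {top_share:.0%} of total impact")
```

In the heavy-tailed world a handful of innovations dominate total impact; in the uniform world, no innovation stands out.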



The modern notion of disruptive innovation does not distinguish between the effects of innovation in different industries, nor are all so-called "disruptions" equally valuable. Schumpeter's model of disruptive innovation resembles a power law, while the modern conception argues that transformative changes are ubiquitous. Here are some readings on the myths and controversies surrounding the concept:

Lepore, J.   The Disruption Machine. New Yorker, June 23 (2014).

* a critique of the "disruption" industry.

Bennett, D.   The Innovator's New Clothes: Is Disruption a Failed Model? Bloomberg Businessweek, June 18 (2014).

* perhaps Lepore is right -- disruption for disruption's sake is not a viable model of economic change.

Bennett, D.   Clayton Christensen Responds to New Yorker Takedown of 'Disruptive Innovation'. Bloomberg Businessweek, June 20 (2014).


* a rebuttal to the Lepore article from the modern "disruption" guru.


II. Satire Makes it Doubly Skewed

Two (intentionally) skewed views on Evolution [1, 2]: God does not do art, and monkeys still exist. Or something like that. Anyway, here is a sampling of creationism satire from Summer 2014.

[1] Pliny the In-Between   Theistic evolution. Evolving Perspectives blog, July (2014).



[2] Why There are Still Monkeys (fake book in the Dummies series). Timothy McVeins Twitter post, June 20 (2014).



III. Ideonational Skew - Satire = Epistemic Closure?


Statistical conspiracy theory? Here is a link to John Williams' Shadowstats site and (appropriately) three readings [1-3] that critique its overall approach. For example, one reading suggests that the "shadow" in the Shadowstats name is largely an artifact of an inappropriate modeling methodology.
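To illustrate the methodological critique (as I read the Aziz piece; the numbers here are invented), an "alternate" statistic that is just the official series plus a constant adds no independent information:

```python
import math

# A minimal sketch of the critique (with invented numbers): if an "alternate"
# inflation series is just the official series plus a constant, it contains
# no independent measurement -- year-to-year changes are identical, and only
# the level is shifted.
official_cpi = [3.2, 2.1, 1.5, 1.6, 3.0, 1.7]   # hypothetical annual % rates
alternate = [x + 7.0 for x in official_cpi]     # "shadow" series: CPI + constant

official_changes = [b - a for a, b in zip(official_cpi, official_cpi[1:])]
alternate_changes = [b - a for a, b in zip(alternate, alternate[1:])]
print(all(math.isclose(u, v) for u, v in zip(official_changes, alternate_changes)))
# True: the "alternate" series adds a level shift, not new information
```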



[1] Aziz   The Trouble with Shadowstats. Azizonomics, June 1 (2013).

[2] Krugman, P.   Always Inflation Somewhere. Conscience of a Liberal blog, July 19 (2014).

[3] Hiltzik, M.   A new right-wing claim: Obama must be lying about inflation. The Economy Hub, Los Angeles Times, July 23 (2014).

March 24, 2014

Official Host of Carnival of Evolution, #70


Synthetic Daisies is once again playing host to Carnival of Evolution's April edition, which goes live on April 1. This year's theme will be the games of evolution. To know exactly what this means, you will have to join us on April 1. This marks the third consecutive year of blogrolling [1] (I have been contributing posts for a bit longer). There are three ways to contribute: 1) log in to the Blog Carnival site and fill out the form, 2) post the link to the Carnival of Evolution Facebook page, or 3) e-mail me directly with “Carnival of Evolution” in the title.


Until then, here are some anatomical renderings and faux-developmental biology of alien hearts and brains [2]. A throwback to the theme of Carnival of Evolution #58: visions from the Evolutionary Future. According to the futurist George Dvorsky [3] and as implicitly suggested on Memory Alpha, we have indeed (but perhaps not ethically) won over these hearts and minds -- and brains. 


NOTES:

[2] Carrick, L.M.   Cranial, Brain, and Heart Development: Human, Andorian, Klingon, Gorn, Tellarite, Vulcan. Vibrant Oxymoron blog, July 7 (2011).

[3] Dvorsky, G.   Star Trek's 'prime directive' is stupid. Sentient Developments blog, January 13 (2007).

March 2, 2014

Fireside Science: Logical Fallacy vs. Logical Fallacy

This content is cross-posted to Fireside Science. To get the most out of this post, please review the following materials:

Alicea, B.   Informed Intuition > Pure Logic, Reason + No Information = Fallacy? Synthetic Daisies blog, January 4 (2014).

The peer-review committee for pure rationality. For more, please see [1].


A while back, I posted some critiques of and modifications to the conventional approach to logical fallacies [1] here on Synthetic Daisies. It seems as though every debate on the internet involves an accusation that one side is engaging in some sort of "fallacy". This is especially true for topics of broader societal relevance, where the notion of logical fallacies has become entangled with denialism [2] and epistemic closure [3].

Social Media argumentation, one person's take.

To recap (full version of the post here), I proposed that we remove six fallacies from the chart above and replace them with seven fallacies that are more inclusive of moral (e.g. emotional) and cultural biases. To me, the "Skeptic's Guide to the Universe" model feels like a 12-step program of rationality. It may help you think in a desirable way (e.g. with pure rationality). However, pure rationality does not provide you with a means to place conditions on an objective argument. The triumph of logical rigor ultimately becomes a straitjacket of the mind, reducing one's ability to think situationally.

Are the arbiters of deduction wrong on six counts?

Now it appears that I'm not alone in my concerns. Big Think now has a theme, "The Fallacy Fallacy", on the fallacies of logical fallacies [4], with contributions from Alex Berezow, Julia Galef, Daniel Honan, and James Lawrence Powell.


In this collection of essays and interviews, the overuse of logical fallacies is itself cited as a fallacy of composition, and better ways to construct arguments are offered. These include several general observations about the validity of reason itself, observations that transcend the popular "identify the fallacy" model.

One theme involves making the case for consensus through joint argumentation. Correct answers are not to be found via the single most rigorous argument, but by exploring many complementary arguments, each with its own flaws.

Another theme involves being mindful of cognitive biases such as confirmation bias or subconscious cultural preferences. Even when an argument is highly rigorous by the standards of logical consistency, it may still suffer from a lack of perspective.

The third major theme involves the recognition that ignorance is a valid starting point [5] for many arguments. It is impossible to know everything about a topic, so any principled argument is bound to be incomplete. And the traditional fallacy model [6] is likely to make things worse.



NOTES:
[1] This is a list of 24 common logical fallacies, courtesy of Yourlogicalfallacyis.com (Jesse Richardson, Andy Smith, and Som Meadon). Also, most of these are individually found on Wikipedia with a more detailed explanation.

[2] Reinert, C.   Denialism vs. Skepticism. Institute for Ethics and Emerging Technologies blog, February 23 (2014).

[3] Cohen, P.   "Epistemic Closure"? Those are fighting words. NY Times Books, April 27 (2010).

[4] This is not a tautology! But it's not the same thing as the formal version of the fallacy fallacy (a.k.a. argumentum ad logicam).

[5] Contrast with: Argument from Ignorance. RationalWiki.

[6] A nice resource for better understanding all possible logical fallacies: The Fallacy-a-Day-Podcast. A fallacy a day, in readable and podcast form.

January 4, 2014

Informed Intuition > Pure Logic, Reason + No Information = Fallacy?

This content has been cross-posted to Tumbld Thoughts.

The peer-review committee for pure rationality. COURTESY: [1]

My notes on logical fallacies: perhaps they are not as bad as you think. People can make what are clearly errors in logic, and sometimes such fallacies are used as decision-making heuristics or cultural blends. This helps us make difficult decisions in the absence of information, or make sense of situations with little precedent.

This is like a 12-step program for skeptics and humanists (or those who aspire to these values). Much like 12-step programs, it leaves a lot to be desired. These rules are largely naive about propagandist techniques and about the innate cognitive and cultural biases of their readers. Rhetorical argument does not fare well against this list. Fallacies on this list that I take issue with:

1) Special pleading (and appeal to emotion): in cases where people fail to understand the context of a decision, special pleading might help to offset the damage done by a purely logical decision. Legal decisions that do not take special cases into account (e.g. grandfather clauses) are particularly notable.

2) Black-or-white: if decision-making were entirely deliberative (e.g. purely logical), we would never arrive at a decision. In this sense, decision-making must include an impulsive (or emotional) component. 

3) Ad hominem: while attacking the person rather than the argument is a convenient way to win an argument, this idea also assumes that people always argue in good faith and from a position of pure objectivity. This leaves no room for a theory of motivation, particularly when an argument has a thinly-veiled ulterior motive.

4) Slippery Slope: in cases of ambiguous moral or logical clarity, the slippery slope might actually help us clarify boundaries between one state and another. Without this boundary, human cognition is left without a reference point, which does not allow for clear (and culturally-relevant) decisions to be made.

5) Ambiguity: ambiguity is a necessary condition of a living argument. In cases where ambiguity is resolved, the argument or belief/rule system becomes constricted. Allegorical arguments depend on ambiguity to remain relevant -- perhaps this is simply support for the ambiguity fallacy, but allegories are important devices for abducing (e.g. via logical abduction) new logical relationships.

6) Strawman: "misrepresentation" of an argument is often in the eye of the beholder. People tend to rely on heuristics when dealing with complex arguments, so it is hard not to construct a strawman (unless it is exceedingly flimsy, as with most intelligent design endeavors). 

Unless an argument is painstakingly recapitulated, any "elevator talk" length summary is bound to fail. And sometimes arguments are inherent to one's belief system -- in fact any criticism in this case could be viewed as a misrepresentation. In any case, strawman-type approaches can be used to set up improvements to an argument.

Not so fast, deduction fans......

Now here are some new fallacies that I have come up with. These are based on personal experience, both with human interactions and with artificial intelligence. They are a bit more nuanced and specific than the fallacies presented in the "12-step program" model, but then again they speak to some of my critiques above.

1) The "economic argument"/argument from efficiency: resource allocations that benefit me or my social group are superior, and can be extended to efficiency criteria. 

2) The correlative argument: things that co-occur are always significant. The problem is not discussed in the framework of complex, multivariate causality (see the sketch after this list).

3) Argument from exemplar, normalization fallacy: similar to the correlative argument, but runs in the other direction. In this case, the argument is made from a single example.

In some cases, while the argument is made from extended observation, those observations do not map well to the natural phenomenon. Alternatively, comparing phenomena that do not have the same underlying statistical distribution is an example of the normalization fallacy.

4) False consensus: consensus (meeting of the minds, political coalitions, peer-review) always puts you in a better place than where you started. This is a variant of the normalization fallacy, but one that involves the assumption that intellectual triangulation will solve any problem.

5) Argument from extreme relativism: when every culture is correct, no matter how morally repulsive the practice. This comes from a misunderstanding of cultural relativism: relativism is not about values, but about the intersubjectivity of cultural variants. In other words, these variants cannot be understood in isolation, only in the context of other practices.

6) Argument from moral superiority: an argument that is rooted in moral superiority (using partially or questionably factual information to intimidate). The goal of such an argument is to reform, proselytize, or otherwise morally manipulate the intended target.

7) Highly-contingent statistic fallacy: statistics that are the most extreme in recorded history, or the first time a double play was turned in the 5th inning by a left-handed second baseman at night. The superlative is misleading, because the situation is either highly artificial or not conducive to replication.
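As promised above, here is a short sketch of the correlative argument and the normalization fallacy in action, using simulated data (all values are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# The correlative argument in action (simulated data): two series that merely
# share a time trend can be strongly correlated without any causal link.
t = np.arange(100)
series_a = 0.5 * t + rng.normal(0, 3, size=100)    # a steadily rising metric
series_b = 2.0 * t + rng.normal(0, 10, size=100)   # an unrelated rising metric
print(f"raw correlation: {np.corrcoef(series_a, series_b)[0, 1]:.2f}")

# Differencing removes the shared trend; the apparent relationship collapses.
# Comparing the raw series as if they were drawn from the same distribution
# is the normalization fallacy described above.
print(f"detrended correlation: "
      f"{np.corrcoef(np.diff(series_a), np.diff(series_b))[0, 1]:.2f}")
```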

Different Ways of Explaining

To conclude, I will demonstrate how there are different ways of explaining. Is one specific type always superior, or is it context-dependent? Or do the abductive and deductive approaches have their own unique advantages?

Ghosts in the machine....

Who's better at explaining things, a novice with an interest and a creative mind, or an expert at really complex concepts [2]? Here, we have an example of the former (Bjork explaining how a TV works in three minutes) and the latter (Hiroshi Ishiguro and other robotics experts explaining the uncanny valley in one minute). And, of course, Spock can address both sides of the equation in one quick caption.

Robots and Humans. Theory of Mind but no context-dependence. Weird and Unnerving.

NOTES:
[1] This is a list of 24 common logical fallacies, courtesy of The Skeptics Guide to the Universe and Yourlogicalfallacyis.com (Jesse Richardson, Andy Smith, and Som Meadon). Also, most of these are individually found on Wikipedia with a more detailed explanation.

[2] For an interesting example of how readers of a magazine for statistics professionals explained the Monty Hall problem to a general audience: Reader's Challenge: the Monty Hall problem. Significance magazine, October, 32-33 (2013).
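For readers who want to convince themselves numerically, here is a quick simulation of the Monty Hall problem (my own sketch, not the Significance readers' explanations):

```python
import random

def monty_hall(switch: bool, trials: int = 100_000) -> float:
    """Simulate the Monty Hall game, returning the empirical win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        choice = random.randrange(3)
        # The host opens a door that is neither the contestant's pick nor the car.
        opened = next(d for d in range(3) if d != choice and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == car)
    return wins / trials

print(f"stay:   {monty_hall(switch=False):.3f}")   # ~1/3
print(f"switch: {monty_hall(switch=True):.3f}")    # ~2/3
```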


November 11, 2013

New Work on CGSs: ritual and incorporative modeling

Here are two short features (something I am calling social media posters) that I debuted on Tumbld Thoughts. Think of this as posting entries from your laboratory notebook (one of several, in my case) to social media. In this case, each entry is a further development of the Contextual Geometric Structures idea. The first poster is on the exaptation of rituals from a mathematical modeling perspective, using Halloween as a seasonally-appropriate example. The second poster is on something called incorporative modeling, using my "return from Robotistan" [1] as an example.

I. Seasonally-appropriate Cultural Exaptation


Why do rituals change over time? Why do they resemble weird things? And why do people believe even weirder things about these weird things? Here is a demonstration of something I am calling ritual modeling, using Halloween as an example. 

Some elements of this (particularly the forgetting/decay measurements) are drawn from [2]. Additional process modeling and a graph with pseudo-data were produced to demonstrate the idea (see the sketch below).
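The sketch below gives the flavor of the forgetting/decay component using a generic exponential forgetting curve; the decay rate and rehearsal schedule are invented for illustration and are not the equations from [2]:

```python
import numpy as np

# A generic exponential forgetting curve -- a stand-in for the forgetting/
# decay measurements mentioned above, not the actual model from [2]. The
# original meaning of a ritual element decays over time unless it is
# "rehearsed" (re-transmitted), which restores part of its strength.
rate = 0.1                       # decay rate per time step (assumed)
rehearsal_times = {20, 45, 70}   # steps at which the element is re-transmitted
strength = 1.0
trajectory = []
for t in range(100):
    strength *= np.exp(-rate)                # passive decay of meaning
    if t in rehearsal_times:
        strength = min(1.0, strength + 0.5)  # partial restoration by rehearsal
    trajectory.append(strength)
print(f"strength after 100 steps: {trajectory[-1]:.3f}")
```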



II. Returning from Robotistan


Here is a counterpart to my recent post on ritual modeling. To place this in context, here are a few readings on the highly predictive parts of human behavior. In [3], analysis of easily-tracked human behaviors (such as mobility) reveals highly predictable patterns. In [4], Sandy Pentland from MIT discusses how big data (databases of internet behavior) allows us to predict human behavior independently of morals and values. This has the effect of uncovering behaviors (using machine learning techniques) that people are usually not straightforward about in their language or public persona.

These types of analyses are useful both in terms of understanding aggregate cultural trends and the construction of crude behavioral models. So while such approaches are highly successful at characterizing well-known behaviors, they do not consider how new information is incorporated into behavioral schemes, especially those which are not highly predictive to begin with. This requires a more purely computational (e.g. simulation-based) approach.
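As a toy illustration of the predictability claim in [3] (the location sequence below is fabricated; Song et al. use real mobility traces and a temporally-correlated entropy estimate), lower entropy in a behavioral sequence means higher predictability:

```python
import math
from collections import Counter

# A toy version of the mobility-predictability argument in [3]: the lower
# the entropy of someone's location visits, the more predictable they are.
locations = ["home"] * 50 + ["work"] * 40 + ["gym"] * 7 + ["cafe"] * 3
counts = Counter(locations)
n = len(locations)
entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
print(f"visit entropy: {entropy:.2f} bits "
      f"(vs {math.log2(len(counts)):.2f} bits if all places were equally likely)")
```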


To address this, I introduce something called an incorporative model, which draws from earlier work on Contextual Geometric Structures (CGSs). CGSs [2] are a hybrid soft classifier/fluid dynamics-inspired computational model of cultural behavior (e.g. culturally-conditioned collective behaviors). The slides show how observations are incorporated into such models, and how this new information is shared across a population of agents.
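Below is a loose, hypothetical sketch of what incorporation-plus-sharing might look like in code. It is not the CGS model from the slides or from [2]; every element (the soft-classifier centroids, learning rates, ring topology) is an assumption made for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch of "incorporation": each agent holds a prototype (a
# soft-classifier centroid); a novel observation pulls the observer's
# prototype toward it, and the update then diffuses to neighboring agents
# at a weaker strength.
n_agents, dim = 10, 4
prototypes = rng.normal(0, 1, (n_agents, dim))   # one cultural prototype per agent
observation = rng.normal(3, 1, dim)              # an out-of-framework observation

observer, lr, social_lr = 0, 0.5, 0.1
prototypes[observer] += lr * (observation - prototypes[observer])
for neighbor in (observer - 1, observer + 1):    # ring topology (assumed)
    i = neighbor % n_agents
    prototypes[i] += social_lr * (prototypes[observer] - prototypes[i])

dists = np.linalg.norm(prototypes - observation, axis=1)
print("distance to observation, per agent:", np.round(dists, 2))
```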





Before we end, the video for my lecture to the BEACON Center in May is now available on the BEACON Center YouTube channel. The second part discusses the application of CGSs to understanding the evolution and dynamics of economic value.

NOTES:

[1] For more musings from my trip, please see: Alicea, B.   Fear and Loathing in Robotistan. Synthetic Daisies blog, August 20 (2013).

[2] Alicea, B.   Contextual Geometric Structures: modeling the fundamental components of cultural behavior. Proceedings of Artificial Life, 13, 147-154 (2012).

[3] Song, C., Qu, Z., Blumm, N., Barabasi, A-L.   Limits of Predictability in Human Mobility. Science, 327, 1018-1021 (2010).

[4] Pentland, S.   Predicting Customers' (Unedited) Behavior. Harvard Business Review, September 19 (2012).

August 20, 2013

Fear and Loathing in Robotistan

Do you fear your (future) robot overlords? In a recent Mashable op-ed [1], John Havens argued that we should fear the future of artificial intelligence, if only for its propensity to get things wrong and our propensity to put too much trust in the machine's output. Two other emerging themes in popular culture, the coming singularity [2] and the deleterious impact robots may have on job growth [3], inspire what I will call robo-utopianism and robo-angst, respectively.

Ken Jennings. One man who welcomes our new robotic overlords.

Is robo-angst a general fear of the unknown? Or is it a justified response to an emerging threat? I would argue that it is mostly the former. In a previous Synthetic Daisies post critiquing futurism, I postulated that predicting the future involves both unbridled optimism and potential catastrophe. While some of this uncertainty can be overcome by considering the historical contingencies involved, the mere existence of unknowns (particularly if they involve intangibles) drives angsty and utopian impulses alike.

Both of these impulses are also based on the nature of modern robotic technology. Perhaps due to our desire to cheaply replicate a docile labor force, robots represent intelligent behavior that is ultra-logical, but not particularly human [4]. Perhaps the other aspects of human intelligence are hard to reproduce, or perhaps there is indeed something else at work here. Nevertheless, this constraint can be seen in the nature of tests for sentience such as the Captcha (Turing test-like pattern recognition in context), used to distinguish humans from spambots.

Examples of Captcha technology. COURTESY: captcha.net

So how do we go about achieving sentience? As robo-utopians would have it, this is the next logical step in artificial intelligence research, requiring only incremental advances on the current technology platform over time. Does becoming sentient involve massive increases in the ultra-logical paradigm, massive increases in embedded context, or the development of an artificial theory of mind? And if making robots more human requires something else, do we even need to mimic human intelligence?

Perhaps part of the answer is that robots (physical and virtual) need to understand humans well enough to understand their questions. A recent piece by Gary Marcus in the New Yorker [5] posits that modern search and "knowledge" engines (e.g. Wolfram|Alpha) can do no better than chance (e.g. robo-stupidity) for truly deep, multilayered questions that involve contextual knowledge. 

When robots do things well, it usually involves the aspects of human cognition and performance that we understand fairly well, such as logical analysis and pattern recognition. Many of the current techniques in machine learning and data mining are derived from topics that have been studied for decades. But what about the activities humans engage in that are not logical? 

Example of the biological absurdity test.

One example of adding to the ultra-logical framework comes from social robotics and the simulation of emotional intelligence [6]. But animals exhibit individual cognition, social cognition, and something else which cannot be replicated simply by adding parallel processing, emotional reflexivity, or "good enough" heuristics. What's more, the "logical/systematic" and "irrational/creative" aspects of human behavior are not independent. For better or worse, the right-brained, left-brained dichotomy is a myth. For robots to be feared (or not to be feared), they must be like us (e.g. assimilated).

Examples of machine absurdity. TOP: an absurd conclusion from a collection of facts, BOTTOM: deep irony and unexpected results, courtesy of a recommender system.

Perhaps what is missing are shared cultural patterns among a group of robots, or "cultural" behaviors that are nonsense from a purely logical and/or traditional evolutionary perspective. Examples include: the use of rhetoric and folklore to convey information, the subjective classification of the environment, and conceptual and axiomatic blends [7]. 

How do you incorporate new information into an old framework? For humans, it may or may not be easy. If the information falls within the prevailing conceptual framework, incorporation is something humans AND robots can do fairly well. However, when the idea (or exemplar, in the case of artificial intelligence) falls outside the prevailing conceptual framework, we face what I call the oddball cultural behavior problem.

Take ideas that lie outside the sphere of the prevailing conceptual model (e.g. spherical earth vs. flat earth, infection vs. pre-germ theory medicine) as an example. These ideas could be viewed as revolutionary findings, as ideas at odds with the status quo, or as crackpot musings [8]. The chosen point-of-view is informed either by naive theory (e.g. conceptual and axiomatic blends) or by pure logical deduction. Regardless of which is used, the fewer the empirical observations in a given area, the less tied to formal models the arguments become, and the more wild stories predominate. This may explain why artificial intelligence sometimes makes nonsensical predictions, and why humans sometimes embrace seemingly nonsensical ideas.

Incorporating new information into an old framework, a.k.a. the oddball cultural behavior problem. When the idea falls well outside of the existing framework, how is it acted upon?

In some cases, oddball cultural behavior is classified using conceptual blends (or short-cuts) [9] that integrate the new information. This is similar to, but distinct from, how heuristics are used in decision-making. In this case, cultural change (or change in larger contexts/structures) is regulated by these short-cuts, implemented in a combinatorial manner. One might use a short-cut (more flexible than changing a finite number of rules) to respond to the immediate needs of the environment, but because it is not an exact response, the cultural system overshoots the optimal response, thus requiring additional short-cuts.

Moving on from what robots don't do well, some of the robo-angst is directed towards the integration of people and machines (or computation). The discussion in Havens' op-ed about Steve Mann might be understood as radically-transparent ubiquitous computing [10]. Steve Mann's experience is intriguing for the same reasons that human culture is a selectively-transparent ubiquitous framework for human cognition and survival. The real breakthroughs in autonomous intelligence in the future might only be made by incorporating radically-transparent ubiquitous computing into the design of such agents.

When tasks require intersubjective context, it is worth asking the question: which is funnier to the professional clown? A robotic comedian? Perhaps, but he's not quite skilled in the art. COURTESY: New Scientist and Dilbert comic strip.

Why would we want a robot that makes rhetorical slogans [11]? Or a robot that uses ritual to relate with other robots? Or a denialist [12] bot? Before the concurrent rise of big data, social media, and machine learning, the answer might be: we don't. After all, a major advantage of robots is to create autonomous agents that do not exhibit human foibles. Why would we want to screw that up?

However, it is worth considering that these same expert systems have uncovered a lot of aggregate human behavior that both violates our intuition [13] and is not something to be proud of. These behaviors (such as purchasing patterns or dishonesty) may not be optimal, yet they are the product of intelligent behavior all the same [14]. If we want to understand what it means to be human, then we must build robots that engage in this side of the equation. Then perhaps we may see the confluence of robo-angst and robo-utopia on the other side of the uncanny valley.

NOTES: 

[1] Havens, J.   You should be afraid of Artificial Intelligence. Mashable news aggregator, August 3 (2013).

[2] Barrat, J.   Our Final Invention: Artificial Intelligence and the End of the Human Era. Thomas Dune Books (2013).

[3] Drum, K.   Welcome, robot overlords. Please don't fire us? Mother Jones Magazine, May/June (2013) AND Coppola, F.   The Wastefulness of Automation. Pieria magazine, July 13 (2013).

For a fun take on this, see: Morgan R.   The (Robot) Creative Class. New York Magazine, June 9 (2013).

[4] Galef, J.   The Straw Vulcan: Hollywood's illogical approach to logical decisionmaking. Measure of Doubt Blog, November 26 (2011).

[5] Marcus, G.   Why can't my computer understand me? New Yorker Magazine, August 16 (2013).

For a take on recommender systems and other intelligent agents gone bad (e.g. the annoying valley hypothesis), please see: Moyer, B.   The Annoying Valley. EE Journal, November 17 (2011).

[6] Dautenhahn, K., Bond, A.H., Canamero, L., Edmonds, B.   Socially Intelligent Agents. Kluwer (2002).

[7] Fauconnier, G. and Turner, M.   The Way We Think: Conceptual Blending And The Mind's Hidden Complexities. Basic Books (2013) AND Sweetser, E.   Blended spaces and performativity. Cognitive Linguistics, 11(3-4), 305-334 (2000).

[8] For an example of oddball and potentially crackpot ideas in science, please see: Wertheim, M.   Physics on the Fringe: Smoke Rings, Circlons, and Alternative Theories of Everything. Walker & Company (2011) AND Horgan, J.   In Physics, telling cranks from experts ain't easy. Cross-Check blog, December 11 (2011).


[9] Edgerton, R.B.   Rules, Exceptions, and Social Order. University of California Press, Berkeley (1985).

[10] For an interesting take on Steve Mann's approach to Augmented Reality and its social implications, please see: Alicea, B.   Steve Mann, misunderstood. Synthetic Daisies blog, July 18 (2012).

[11] Denton, R.E.   The rhetorical functions of slogans: Classifications and characteristics. Communication Quarterly, 28(2), 10-18 (1980).


[13] For an accessible review, please see the following feature and book: 

Lohr, S.   Sizing up Big Data, Broadening Beyond the Internet. Big Data 2013 feature, New York Times Bits blog, June 19 (2013).

Mayer-Schonberger, V. and Cukier, K.   Big Data: A Revolution That Will Transform How We Live, Work, and Think. Houghton-Mifflin (2013).

[14] Similar types of behaviors (e.g. the Machiavellian Intelligence hypothesis) can be seen in non-human animal species. For classic examples from monkeys, please see: Byrne, R.W. and Whiten, A.   Machiavellian Intelligence: Social Expertise and the Evolution of Intellect in Monkeys, Apes, and Humans. Oxford University Press (1989). 

August 6, 2013

Thought (Memetic) Soup, August edition

Here is the latest set of loosely-related features from my micro-blog, Tumbld Thoughts. Another installment of the Thought (Memetic) Soup concept I debuted last month. This post features topics such as Discovering Scaling Laws in Human Innovation (I), Argument to Social Moderation (II), and Towards a Subjective Information Theory (III). A set of creative and edgy (at least from an academic perspective) posts at the intersection of human culture, technology, and complexity theory.

I. Discovering Scaling Laws in Human Innovation

Here is a feature from IEEE Spectrum [1] on the Santa Fe Institute's Performance Curve Database (PCDB). The picture above is from the IEEE Spectrum article, and features four technologies (dynamic RAM, DNA sequencing, photovoltaics, and wind power) that conform to Moore's Law (e.g. get cheaper, smaller, and better over time) [2].


The Moore's Law-like linear scaling in these four technologies is not without variation, and this variation [3] is related to platform changes, material availability, and other factors. The picture below is from the PCDB website, and features raw data behind the trend in pricing decreases for wind power over time [4]. 
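For readers who want to reproduce this kind of trend-fitting, here is a minimal sketch: exponential cost decline is linear in log-space, so a least-squares fit to log(price) recovers the annual rate of decline and the cost-halving time. The price series below is fabricated; the real data live in the PCDB:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical price series: a 15% annual decline plus noise. Exponential
# decline is linear in log-space, so a least-squares line on log(price)
# recovers the rate of decrease and the cost-halving time.
years = np.arange(2000, 2011)
prices = 100 * 0.85 ** (years - 2000) * np.exp(rng.normal(0, 0.05, len(years)))

slope, intercept = np.polyfit(years, np.log(prices), 1)
print(f"estimated annual price decline: {1 - np.exp(slope):.1%}")
print(f"estimated cost-halving time: {np.log(2) / -slope:.1f} years")
```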



II. Argument to (Social) Moderation


Here are a few images, thoughts, and readings related to the concepts of the political "center" and the organizational power of "majority rules", discussed in a recent Washington Post article [5]. Are there inherent cognitive tendencies that define left, right, and center, or do ideas get sorted by cultural and historical context (e.g. maintenance of binary oppositions)?


Perhaps a better question is whether or not the "center" is the best possible political philosophy, given that it is often the product of consensus and compromise. This is of course the idea behind radical centrism, but also consistent with arguing to moderation (a well-known logical fallacy). 

However, moving too far in one direction or another away from the center also poses a number of problems. The psychology of extremism suggests that while leftist and rightist tendencies may be rooted in psychological tendencies [6], extremism itself may be the product of general cognitive biases [7].


There is also a more interesting set of relationships between the right-left dichotomy and the dichotomy between strict majoritarianism and radical pluralism. While majoritarianism is the standard in most democracies, it often fails in culturally diverse societies [8]. The alternative would be pluralism, but pluralism may or may not be consistent with the principles of pragmatism (e.g. centrism).


III. Towards a Subjective Information Theory


This is a rose, this is not a pipe, and there is no spoon. What does this mean for me [9]? Aside from the possibility matrix shown above, Rodney Needham [10] introduces us to cultural subjectivity by describing Wittgenstein's triangle. Such objects [11] can be defined as ones that do not have a genuinely proper description. Consider a rose as an example:

1) this is a rose (object recognition).

2) this rose has five petals (classification).

3) this rose is red (color perception).

4) this rose smells sweet (olfactory perception).

5) this rose is beautiful (subjective judgement).

These statements can be grouped (1, 2, 3-4, and 5) with respect to increasing amounts of subjectivity. Such increasing subjectivity leads to both intra- and inter-cultural diversity. This suggests that something called subjective information theory (SIT) might be possible [12].

In SIT (which is a hypothetical theory), subjectivity would increase as the configurational entropy (e.g. a greater number of possibilities) of the imagination increases. SIT would also require some form of data fusion [13], which would combine the possible answers for each statement above.
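To make the configurational-entropy intuition concrete, here is a toy formalization of my own (not a worked-out SIT): if a statement admits k equally-plausible readings across observers, assign it log2(k) bits. The k values below are invented:

```python
import math

# Toy "subjective information" scores: k = number of equally-plausible
# readings each statement admits across observers (invented values).
statements = {
    "object recognition (1)":   1,   # near-universal agreement: it is a rose
    "classification (2)":       2,   # petal counts disputed at the margin
    "perception (3-4)":         4,   # color and smell words vary by observer
    "subjective judgement (5)": 16,  # "beautiful" admits many readings
}
for name, k in statements.items():
    print(f"{name}: {math.log2(k):.1f} bits of configurational entropy")
```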

NOTES:

[1] Elert, E.   Tech Trajectories: four more Moore's Laws. IEEE Spectrum, July 26 (2013).

[2] Moore's law was originally applied to semiconductor size, power and pricing with respect to time. This linear scaling is dependent upon a certain device physics and material composition (and nothing else).

[3] these include changes in the linear function (e.g. plateaus) or the rate of change (e.g. doubling time).

[4] there is a wealth of performance curve data for many different technologies at the website.


[5] Klein, E.   There's no such thing as "the center". Washington Post, July 25 (2013). Cartoon (middle image) is a KAL cartoon (courtesy of The Economist).

[6] Mitroff, I.I.   The Psychology of Extremism. HuffPo blog, August 30 (2011).

[7] Fernbach, P.M., Rogers, T., Fox, C.R., and Sloman, S.A.   Political Extremism Is Supported by an Illusion of Understanding. Psychological Science, 24(6), 939-946 (2013).

[8] Editorial   Majoritarianism: Zombie democracy. Economist, June 22 (2013).

[9] the rose reference is featured in note [10]. "This is not a pipe" is in reference to Magritte's "Treachery of Images". "There is no Spoon" is a reference from "The Matrix" (the first movie).

[10] Needham, R.   Against the tranquility of axioms. University of California Press, Berkeley (1983).


[12] One version of subjective information is discussed in Chapter 5: Jumarie, G.   Relative Information. Springer Series in Synergetics, 47 (1990). In this reference, standard Shannon entropy is modified to account for symbols and meanings (semantics).

Another version can be found here: Tononi, G.   Phi: a voyage from the brain to the soul. Random House (2012). Here, information theory is used to account for neuronal diversity and the variety of mental states in conscious experience.

[13] For more information, please see: Bouchon-Meunier, B.  Aggregation and Fusion of Imperfect Information. Springer-Verlag, Berlin (1998).
