December 27, 2010
Paper of the Week (Computational Physics of Film)
P.S. If you are interested in the generation of virtual sound, take a look at the work of Douglas James at Cornell.
December 23, 2010
Recursive algorithmism: a religion for the other .005 of us
In light of the recently released Tron sequel, I decided to start a new religion called Recursive Algorithmism.
Just check out these testimonials:
"One of the great things about our religion is that there is that repentance takes the form of a for loop". Anonymous.
"In the year 1954, brother Turing traversed a strict hierarchy for 40 days and 40 nights. After he reached the tip of the tree branch, he looked out over a sparse array and saw a lone pixel with a value of "1". This is the rock of our church". Anonymous.
"And on the eighth clock cycle, the program halted. Holy is that instance." Anonymous.
"We give the phrase 'Ghost in the Machine' a whole new meaning". Anonymous.
One of our many planned churches (image to scale).
Potential Pixellated Practitioner/Proselytizer
December 22, 2010
Surrealism of the Month II
The definition of surrealism is... a "Butthole Surfers" album for the masses. Really, take your pick (the VH1 airtime [1] or the Lady Gaga video suggested viewing [2]) on this one...
December 11, 2010
Jeff Hawkins, HTM, and "intelligence"
Jeff Hawkins (the theoretical neuroscientist/mobile computing pioneer) recently gave a lecture at the Beckman Institute on his work at Numenta on Hierarchical Temporal Memory (HTM). Actual title: "Advances in Modeling Neocortex and Its Impact on Machine Intelligence". A video of the lecture can be viewed here.
Basically, Jeff is proposing a new paradigm for thinking about brains and technology. With the advent of "soft" computing techniques (e.g. evolutionary algorithms, neural networks), bio-inspired software, and new techniques for peering into the brain (e.g. fMRI, EEG, and fNIR), we need a new way both to produce machine intelligence and to understand theoretically what is going on in the brain. The fact that he makes this link, and has been interested in it for most of his career, automatically makes me a fan.
Yet while I like Jeff Hawkins (I basically bought into the argument he laid out in "On Intelligence"), I do not agree with some of the details featured in this talk (although the work is technically impressive and correct). Chiefly, the idea that neocortex (the 6-layered tissue, more properly called isocortex, responsible for much of mammalian higher cognition) is computationally powerful because it has a repetitive structure.
I have encountered this idea in a number of computational neuroscience papers. My objection is to the idea that repetitive structures are limited to the neocortex, and that the neocortex alone defines intelligence. This is incorrect on two counts:
1) There are other structures (the cerebellum, parts of the medial temporal lobe) which also exhibit repetition, and it is not as though these structures fail to produce intelligent behavior: the cerebellum is known for movement and other behavioral regulation, while the medial temporal lobe is involved in memory consolidation and spatial navigation. The problem is that Hawkins all too often equates repetition of structure with pattern recognition and predictive capacity. While this is certainly true of visual cortex, it is not true of all neocortical regions. It may work when running HTM simulations, but is it biologically accurate and ultimately robust? There are other attributes, such as convergence and higher-order feedback, that exploit this repetitive, hierarchical structure but neither require nor preclude pattern recognition.
2) Birds use pallium-derived structures to generate intelligent behavior. While one could argue that these structures are also hierarchical (they are certainly layered), they do not share many of the design principles found in mammalian neocortex. The neural substrate of insects, which can likewise generate complex behaviors, is also not equivalent to the mammalian neocortex. While hierarchical processing may also exist in the avian pallium and in insect neuropil/ganglia networks, it may or may not be consistent with Hawkins' HTM.
The other problem I have with current artificial intelligence research (and machine learning in general) is the focus on pattern recognition. While pattern recognition may be a necessary condition for intelligence, it is not the only hallmark of intelligence. To his credit, Hawkins argues that prediction is actually the hallmark of intelligent behavior. This is much more powerful than blind pattern recognition, which can produce a lot of false positives (e.g. seeing an image of the Virgin Mary on the side of a barn). The ability to predict upcoming events in the environment may define intelligence not only in the brain (neuronal populations), but among cell and organismal populations as well.
Yet there may be ways to define intelligent behavior outside the realm of prediction. For several years now (since the early years of my PhD studies), I have been fascinated by sensory integration and signal convergence in the brain. For example, perceiving a coffee mug being lifted, brought to the mouth, and set down again involves visual, auditory, and tactile cues -- all of which need to be integrated in the course of producing the seamless conscious experience we all take for granted. There are centers in the brain (e.g. the superior colliculus) in which single neurons will integrate inputs of different sensory types and, depending on how those inputs are weighted, will produce either an additive, suppressive, or superadditive response. The superadditive response is the outcome that has intrigued me most, as taken across cells it could produce a very complex (and fascinating) emergent phenomenon. And, like it or not, this may produce intelligent behavior with no direct connection to either prediction or pattern recognition.
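The additive/superadditive distinction can be sketched numerically. This is a toy illustration, not a model from any of the papers below: the function and numbers are hypothetical, and "enhancement" follows the convention of comparing the multisensory response against the best unimodal response (as in the Meredith and Stein work cited under Further Reading).

```python
# Toy sketch of multisensory integration in a single superior colliculus
# neuron. All response values are illustrative, not fit to data.

def enhancement(visual, auditory, combined):
    """Percent enhancement of the combined response over the best unimodal response."""
    best = max(visual, auditory)
    return 100.0 * (combined - best) / best

# Strong unimodal inputs: the combined response is roughly the sum of the parts.
print(enhancement(visual=20, auditory=18, combined=38))   # 90.0

# Weak unimodal inputs: the combined response can be superadditive -- far more
# than the sum of the parts (the "inverse effectiveness" pattern).
print(enhancement(visual=2, auditory=2, combined=12))     # 500.0
```

The interesting regime is the second one: two near-threshold cues that individually evoke almost nothing together evoke a large response, which is what makes the population-level emergent behavior plausible.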
Further Reading:
Anastasio, T.J., Patton, P.E., and Belkacem-Boussaid, K. (2000). Using Bayes' rule to model multisensory enhancement in the superior colliculus. Neural Computation, 12, 1165-1187.
Ernst, M.D. and Banks, M.S. (2002). Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415, 429-433.
Floreano, D. and Mattiussi, C. (2008). Bio-inspired Artificial Intelligence. MIT Press, Cambridge, MA.
Hawkins, J. and Blakeslee, S. (2004). On Intelligence. Times Books, New York.
Jarvis, E.D. et al. (2005). Avian brains and a new understanding of vertebrate brain evolution. Nature Reviews Neuroscience, 6(2), 151-159.
Jehee, J.F.M. and Murre, J.M.J. (2008). The scalable mammalian brain: emergent distributions of glia and neurons. Biological Cybernetics, 98(5), 439-445.
Meredith, M.A. and Stein, B.E. (1983). Interactions among converging sensory inputs in the superior colliculus. Science, 221(4608), 389-391.
Richards, W. (1988). Natural Computation. MIT Press, Cambridge, MA.
Shadmehr, R. and Wise, S.P. (2005). Computational Neurobiology of Reaching and Pointing. MIT Press, Cambridge, MA.
Shasha, D.E. and Lazere, C. (2010). Natural computing: DNA, quantum bits, and the future of smart machines. W.W. Norton, New York.
Stein, B.E. and Meredith, M.A. (1993). The merging of the senses. MIT Press, Cambridge, MA.
Stein, B.E. (1998). Neuronal mechanisms for synthesizing sensory information and producing adaptive behaviors. Experimental Brain Research, 123, 124-125.
Strausfeld, N.J. et al. (1998). Evolution, Discovery, and Interpretations of Arthropod Mushroom Bodies. Learning and Memory, 5, 11-37.
December 7, 2010
Aliasing vs. higher-dimensionality: a general question
I just got into the show "Time Warp" (a Discovery Channel creation) on DVD. The basic idea is that a scientist and a high-speed camera expert get together and film various processes, such as a Mentos and Coke explosion or putting things in a blender. The interesting part is when they film it at 1000 frames per second, and then play it back in super slow motion.
Profile of "Time Warp" on Wikipedia
This got me thinking about our understanding of everyday processes. For example, in "Time Warp", the extra fast video allows us to see "hidden" aspects of a process. Cracking an egg and capturing it at 2000 Hz reveals some interesting dynamics indeed.
Egg cracking at 2000 fps
There is a rich history in the biomechanics community of recording motion (either with motion sensors or video) at high sampling rates. These high sampling rates have become possible with advances in technology, so that the ability to record at 1000 (or even 10,000) frames per second is becoming increasingly cheap and portable.
Also keep in mind that there exists a concept called aliasing, which places some constraints on how we sample a given process.
Definition of aliasing from Wikipedia
The most relevant aspect of aliasing to this discussion is that undersampling a process leads to distortion. The Wikipedia page above provides some examples of aliased images. A more intuitive version of aliasing: put a marker on a bicycle wheel, spin the wheel at high speed, and film it. The marker will appear first to hover in place and then to drift in the direction opposite the spin.
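The spinning-wheel illusion can be sketched with a few lines of arithmetic. This is a minimal illustration (the function name and numbers are my own): a rotation at f_true revolutions per second, filmed at fs frames per second, appears at the frequency obtained by folding f_true into the range [-fs/2, fs/2]; a negative result is the apparent backwards drift.

```python
import math

# Minimal aliasing sketch: a marker rotating at f_true Hz, sampled by a camera
# at fs frames per second. If fs < 2 * f_true (the Nyquist limit), the apparent
# rotation rate is wrong -- the familiar "wagon wheel" effect.

def apparent_frequency(f_true, fs):
    """Aliased frequency that a camera sampling at fs reports for a rotation at f_true."""
    f = math.fmod(f_true, fs)          # fold into one sampling period
    if f > fs / 2:
        f -= fs                        # alias into [-fs/2, fs/2]
    elif f < -fs / 2:
        f += fs
    return f

print(apparent_frequency(9.0, 10.0))   # -1.0: wheel seems to spin slowly backwards
print(apparent_frequency(10.0, 10.0))  #  0.0: marker appears to hover in place
print(apparent_frequency(3.0, 10.0))   #  3.0: sampling is fast enough, no aliasing
```

The hover-then-reverse sequence described above is exactly what happens as the true rotation rate climbs through and past the camera's frame rate.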
So I wonder: does observation at ultra-high speeds (an experimental camera exists that can capture motion at 1,000,000 frames per second) reveal new, higher-order modes of the process, or does it lead to aliasing at some point? For example, in arm motion, there are higher-order derivatives of position called jerk, snap, crackle, and pop. Can we capture higher-order motion such as this just by using higher-resolution measurement devices, or is there an upper limit to our observational ability?
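One hedged way to explore the question: estimate those higher-order derivatives from sampled position data by repeated finite differences. The signal and sampling rate below are hypothetical, but the pattern is general; each round of differencing also amplifies measurement noise by roughly 1/dt, which is one practical upper limit on how far up the derivative chain a camera can see.

```python
import math

# Sketch: velocity, acceleration, and jerk recovered from position samples
# by repeated forward differences, at a "Time Warp"-like 1000 fps.

def derivative(samples, dt):
    """Forward-difference estimate of the time derivative of a sampled signal."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

dt = 0.001  # 1000 frames per second
t = [i * dt for i in range(1000)]
position = [math.sin(2 * math.pi * ti) for ti in t]  # hypothetical 1 Hz arm-like motion

velocity = derivative(position, dt)
acceleration = derivative(velocity, dt)
jerk = derivative(acceleration, dt)

# For sin(2*pi*t), peak jerk is (2*pi)**3, about 248; the estimate lands close.
print(max(abs(j) for j in jerk))
```

With clean synthetic data this works well; with real camera data, noise would swamp the jerk estimate long before snap, crackle, and pop, which is the observational limit in practice.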
Profile of "Time Warp" on Wikipedia
This got me thinking about our understanding of everyday processes. For example, in "Time Warp", the extra fast video allows us to see "hidden" aspects of a process. Cracking an egg and capturing it at 2000 Hz reveals some interesting dynamics indeed.
Egg cracking at 2000 fps
There is a rich history in the biomechanics community of recording motion (either with motion sensors or video) at high sampling rates. These high sampling rates have become possible with advances in technology, so that the ability to record at 1000 (or even 10,000) frames per second is becoming increasingly cheap and portable.
Also keep in mind that there exists a concept called aliasing which places some constraints on how we sample a given process
Definition of aliasing from Wikipedia
The most relevant aspect of aliasing to this discussion is the issue of oversampling which can lead to distortion. On the Wikipedia page above, the author has provided some examples of aliased images. A more intuitive version of aliasing is if you were to put a marker on a bicycle wheel and spin it at a high speed. The card would appear to first hover in place, and then drift in the reverse direction of the spin.
So I wonder: does observation at ultra-high speeds (an experimental camera exists that can capture motion at 1,000,000 frames per second) reveal new, higher-dimensional modes of the process, or does it lead to aliasing at some point? For example, in arm motion, there are higher-dimensional derivatives of position called jerk, snap, crackle, and pop. Can we capture higher-dimensional motion such as this just by implementing higher resolution measurement devices, or is there an upper limit to our observational ability?
November 18, 2010
Amazon's long arm
This story looks interesting (not that I'm an aspiring filmmaker, but the concept is what caught my eye):
Amazon's overture to/competition for filmmakers
Amazon has impressed me with their interest in cloud computing and areas not typically associated with e-commerce. Next up, opportunities for scientific research at sciresearch.amazon.com?
November 17, 2010
Free Artificial Life
The proceedings of the Artificial Life XI conference are available to download for free from MIT Press:
Link
This is a first for these proceedings. Have a look.
October 10, 2010
Surrealism of the Month I
August 15, 2010
Research on alternative forms of funding, some thoughts
With funding for research very tight (from governmental, foundation, and corporate sources alike), and given my interest in alternative sources of funding, I decided to post some of my casual investigations into a new model for funding scientific inquiry and technological development.
Recently, I saw an interview with Nathan Myhrvold on Charlie Rose (a PBS show). Mr. Myhrvold, a former Microsoft employee, has created a company called Intellectual Ventures, which helps innovators file patents and extract monetary return from their investments in time and research. From the research that I have done, this type of arrangement has a lot of potential for academic pursuits. The goal is to translate ideas and (engineering) artifacts into a profitable enterprise. Overall, I like the mission of Intellectual Ventures. Yet patents are not the only means to extract profit from the landscape of ideas.
The patent system has at least three drawbacks when it comes to helping along modern innovation. First, the initial cost of filing a patent is quite high. In university settings this is mitigated somewhat, but it still poses a problem when multiple patents are sought; the costs add up, particularly if those patents ultimately bring in little money. Second, there is currently a backlog of applications at the patent office, which increases the time between initial submission and the awarding of a patent. Finally, patents are usually the result of research funded by one of the federal funding agencies. Extracting value from intellectual activity is therefore a drawn-out process that might benefit from the ability to extract value at the various intermediate steps.
One answer to some of these challenges might be found in project funding sites such as Kickstarter. Although Kickstarter is geared toward activities with a more concrete end product than typical scientific research, it provides a means to raise small amounts of funds for initial and piecemeal advances.
I wrote a white paper earlier this year on establishing something similar, before I started paying attention to Kickstarter (NOTE: my idea differs in that it directly supports the unique environment of scientific and other academic work). My future plans are to try to work with Kickstarter to develop small-scale projects that don't have concrete or "tangible" financial outcomes.
I will be posting more on this topic as I develop these ideas further.
June 12, 2010
Review of "Bursts"
Review of Barabasi, A-L. (2010). Bursts: the hidden pattern behind everything we do. Dutton Press, New York.
Introduction
If you recollect your daily routine, what adjective would you use to describe it? What did you do today, and in what order did you do those things? You probably engaged in an automatic routine, with many events being the same and some of those events occurring at roughly the same time from day to day. You also probably do things at the same time as your neighbor. Part of this is based on the way society is organized. Indeed, people tend to have similar schedules or engage in the same essential activities. But an alternative hypothesis suggests that statistical laws govern aggregate human (and natural) behavior. When processes such as travel across human transportation networks or e-mail correspondences unfold over time, they do so in a statistically distinct manner. In his previous book "Linked", Barabasi got ahead of the social networking curve to demonstrate the power of connectedness to a general audience. Yet connectivity has two components: the static topology, and the dynamic, less-understood process of connection. In "Bursts", the argument is made that dynamic behavioral patterns (such as connecting to a network) can be characterized using a series of non-uniform statistical distributions. These models reveal that traffic jams, long check-out lines, and even crime waves are not unpredictable events, but rather can be understood as "bursts" that occur at relatively infrequent intervals. And the very nature of their burstiness (in that they involve synchronized, collective behavior) makes them predictable.
Your daily schedule might involve brushing your teeth at a different time than your neighbor, but going to work at the same time. This is one kind of behavioral burst featured in Barabasi's book. But we can also think of bursts as excitable events that occur against a random background. We are all familiar with the excitability exhibited by exploding fireworks, popping popcorn, or even neurons firing an action potential. All of these events have one thing in common: nonlinear behavior governed by a threshold. In the case of both popcorn and neurons, a constant stimulus is applied that eventually triggers a change in state, and these bursts only become useful in the context of collective behavior (one is immensely enjoyable, the other essential to your survival). Physicists sometimes refer to this type of response as a first-order phase transition. In the classic sandpile model of Bak, Tang, and Wiesenfeld, the gradual growth of a sandpile is sometimes punctuated by large-scale displacements in its structure. These large-scale displacements occur at low frequencies relative to the more uniform small-scale displacements. In many cases, the distribution of these events over time can be scaled to a power law (or 1/f) distribution, meaning that they are fundamentally distinct from a uniform diffusive process, which results in a Gaussian distribution of events. Any good book on statistical physics can put the significance of 1/f processes in context, in addition to providing a wealth of specific examples from nature. But how do these well-characterized physical processes map to and help explain human behavioral bursts?
An Underlying Mechanism
One undercurrent of this book is that excitable events, synchronized collective behavioral processes, and power law behavior are all part of the same subject. That subject is the manner in which bursts unfold, which is according to a Poisson process. While excitable events, collective behavioral processes, and power law behavior all set the stage for bursty events, it is the Poisson process that distributes these events stochastically with respect to time. In addition, power law behavior should be expected of a Poisson process that can be observed at all timescales. The investigation of e-mail archives demonstrates this: most e-mail arrives at specific times of day or on specific days of the week, with smaller clusters or singleton events occurring in the interim. The difference between e-mail archives and physical avalanches, however, is instructive. In the case of the former (e-mail), the "bursts" are due to large-scale events driven by intention. In the case of the latter (physical avalanches), the "bursts" are due to large-scale events driven by a buildup of forces that exceeds a threshold. In both cases, linear inputs act collectively to produce a nonlinear output. One could argue that the "flash crash" of May 2010 was caused by the bursty nature of the stock market, a system that resides, phase-space-wise, in between the collective intention of e-mail communication and the pure stochasticity of physical avalanches.
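The contrast between uniform random timing and bursty timing can be made concrete with a small simulation. This sketch is illustrative rather than drawn from the book: it compares inter-event gaps from an exponential distribution (the waiting times of a homogeneous Poisson process) against gaps from a heavy-tailed Pareto distribution, using a single hypothetical summary statistic of my own choosing.

```python
import random

# Inter-event gaps: exponential (homogeneous Poisson process) vs. heavy-tailed
# (Pareto). Both sequences contain the same number of events, but the
# heavy-tailed one packs most events into bursts separated by long silences.

random.seed(1)
n = 100000

exp_gaps = [random.expovariate(1.0) for _ in range(n)]
pareto_gaps = [random.paretovariate(1.5) for _ in range(n)]  # P(gap > x) ~ x**-1.5

def longest_gap_ratio(gaps):
    """How large is the single longest silence relative to the mean gap?"""
    return max(gaps) / (sum(gaps) / len(gaps))

print(longest_gap_ratio(exp_gaps))     # modest: typically on the order of 10x the mean
print(longest_gap_ratio(pareto_gaps))  # typically hundreds of times the mean
```

The exponential sequence never produces a silence much longer than a handful of mean gaps, while the heavy-tailed sequence routinely does, which is the "burst followed by long lull" signature at issue here.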
Barabasi writes "Bursts" as a hybrid historical/scientific narrative: he switches back and forth between medieval Hungarian history and contemporary scientific stories about how bursts can be discovered in the data we all produce. By the data we all produce, I mean that his examples focus on social phenomena such as e-mail transactions and human mobility patterns. Besides his own work, the book also features the work of Dirk Brockmann, a physicist who did an experiment tracking the movement of dollar bills around the United States. The outcome was that dollar bills diffuse according to a Levy process, or a 1/f form of diffusion. In a Levy flight, which has also been observed in albatross foraging behavior, short and randomly oriented trips are punctuated by rare, long-distance trips. As with e-mail communications, these bursts are driven in part by intention. It is only when these intentions are joined to chance events and then placed into the context of a process that unfolds over time that the burstiness of behavior becomes apparent.
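A Levy flight of the kind described above is easy to sketch. This is a hedged toy version, not Brockmann's analysis: step lengths are drawn from a Pareto distribution with an exponent I chose for illustration, and directions are uniform at random, which is the usual minimal construction.

```python
import math
import random

# Toy 2D random walks: fixed-length (Brownian-style) steps vs. Levy-flight
# steps with heavy-tailed lengths. The Levy walker makes many short hops
# punctuated by rare long jumps, like the dollar bills and albatrosses above.

random.seed(2)

def walk(step_lengths):
    """Take steps of the given lengths in uniformly random directions; return final distance from origin."""
    x = y = 0.0
    for r in step_lengths:
        theta = random.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)
        y += r * math.sin(theta)
    return math.hypot(x, y)

n = 5000
brownian_steps = [1.0 for _ in range(n)]                    # constant step length
levy_steps = [random.paretovariate(1.5) for _ in range(n)]  # heavy-tailed lengths

print(walk(brownian_steps))
print(walk(levy_steps), max(levy_steps))  # displacement dominated by the few longest jumps
```

The qualitative point survives any reasonable parameter choice: the Levy walker's net displacement is controlled by a handful of extreme jumps, whereas the fixed-step walker diffuses slowly and uniformly.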
Conclusion
But what can be learned from the sojourns into history? There is a strong undercurrent of historical contingency in the stories Barabasi selects. Intentional or not, the point is made as to the "burstiness" of how history unfolds. Historical contingency, which refers to the dependence of current events on the trajectory of past events, is not a purely deterministic process. While the path history takes is constrained by past events, those defining events occur against a background of chance events. Often when people tell stories about how they got a particular job or how they met a spouse, the phrase "as luck would have it" comes up. But perhaps bursts, made manifest by dating websites, commuter trains, and scheduling constraints, are more responsible for these chance events than has traditionally been recognized.
May 22, 2010
Pac-Man as Google Doodle
Logged onto the internet this morning, and found the Pac-Man Doodle, celebrating the 30th anniversary of the famous video game.
Be careful, it's interactive (and addictive)! The embedded game sounds took me by surprise. Watch it again and again here.
EASTER EGG: If you click on the "Insert Coin" button twice during the same game, Ms. Pac Man appears alongside Pac-Man. You will need to use a second player controller to move the other Pac-person, though.
February 10, 2010
Nature's method of the year, 2009 - iPS
Here is an excellent video from the Nature Publishing Group on their method of the year for 2009: induced pluripotent stem (iPS) cells. What follows are some select screenshots from the video (available on YouTube), with annotation:
How can we reproduce the properties of embryonic stem cells cheaply and somewhat efficiently?
By delivering just four factors (via retroviral vector), we can convert somatic cells into pluripotent ones. While not every exposed cell converts, we can convert enough cells to form self-sustaining colonies.
While the goal is to make fully pluripotent cells for experimentation and perhaps even therapy, cells may also be hijacked along the way (e.g. moved toward a pluripotent fate and then differentiated into another somatic cell type). For example, neurons might be made from fibroblasts in this manner. While not a common technique, one round of infection can produce many cells that are partially reprogrammed (having some but not all of the traits of a truly pluripotent cell).
February 1, 2010
Advances in Neuroengineering
A recent event put on by the IEEE EMBS focused on recent advances in Neuroengineering, or the interfacing of engineered devices with the nervous system. Neuroengineering is an emerging approach to treating disabilities, understanding the brain, and building closed-loop control systems for brain-machine interface applications. Here are the sessions (on IEEE.tv):
Advances in neuroengineering
Understanding and treating conditions of the brain
One interesting topic brought up at these sessions was "microsleeps". Using advanced monitoring techniques, signatures akin to attentional lapses have been discovered. Microsleeps are sleep states that last on very short time scales, and their presence could only be detected using electrophysiological techniques. This might be useful in predicting the onset of apnea episodes, or instances when a driver or heavy machine operator is about to lose focus on their task.
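As an illustration of the basic detection idea, here is a minimal sketch: flag windows of a synthetic EEG-like trace whose amplitude (moving RMS) crosses a threshold, standing in for a microsleep signature. All parameters and the signal itself are invented for this sketch; the actual electrophysiological methods in the talks are far more sophisticated.

```python
import math

# Hypothetical illustration: detect a brief high-amplitude "theta burst"
# (a stand-in for a microsleep signature) in a synthetic EEG-like trace
# using a moving-RMS threshold. All parameters are invented for the sketch.

FS = 250                            # sampling rate (Hz)
DURATION = 10                       # seconds
BURST_START, BURST_END = 4.0, 5.5   # planted "microsleep" interval (s)

def synthetic_trace():
    """Low-amplitude background with an embedded high-amplitude 6 Hz burst."""
    trace = []
    for i in range(FS * DURATION):
        t = i / FS
        sample = 0.2 * math.sin(2 * math.pi * 10 * t)      # background alpha
        if BURST_START <= t < BURST_END:
            sample += 1.5 * math.sin(2 * math.pi * 6 * t)  # theta-like burst
        trace.append(sample)
    return trace

def detect_epochs(trace, win=int(0.5 * FS), threshold=0.5):
    """Flag non-overlapping windows whose RMS amplitude exceeds the threshold."""
    flagged = []
    for start in range(0, len(trace) - win, win):
        chunk = trace[start:start + win]
        rms = math.sqrt(sum(x * x for x in chunk) / win)
        if rms > threshold:
            flagged.append((start / FS, (start + win) / FS))
    return flagged

epochs = detect_epochs(synthetic_trace())
print(epochs)  # → [(4.0, 4.5), (4.5, 5.0), (5.0, 5.5)]
```

The detector recovers exactly the windows overlapping the planted 4.0-5.5 s burst; real systems would of course work with band-specific power and validated thresholds rather than raw RMS.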
There are also therapeutic applications of these methods. The most intriguing would be to couple the real-time monitoring of muscle and brain activity with cutting edge gene and cell therapy treatments. The videos are worth the watch.
January 25, 2010
Roosting on Structures
Last week I observed some birds above the University power plant smoke stack. I ended up making a video of their flight pattern. My interest in this stems from the widespread use of bird flocks as a model system for understanding collective behavior (an instance of emergence), in addition to some work I have done on modeling roosting behaviors among birds [1]. They were orbiting the power plant in a toroidal pattern, stretching out their orbit in slightly different directions on every cycle.
Image of birds circling the MSC smokestack, Michigan State University.
On one day, I saw them make a few cycles in one direction, then make a loop to the outside of the orbit in order to change direction. On another day, they decided to roost after making their series of orbits.
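The kind of collective motion described above is often modeled with simple local rules. Here is a minimal Vicsek-style sketch (each bird adopts the average heading of its neighbors, plus noise) showing alignment emerging from purely local interactions; every parameter is my own choice for illustration, not drawn from any specific roosting study.

```python
import math, random

# Minimal Vicsek-style model of collective motion: each "bird" adopts the
# average heading of neighbors within radius R, plus angular noise.
# Parameters are illustrative, not taken from any specific study.

random.seed(1)
N, L, R, SPEED, NOISE, STEPS = 60, 10.0, 1.5, 0.1, 0.2, 400

pos = [(random.uniform(0, L), random.uniform(0, L)) for _ in range(N)]
theta = [random.uniform(-math.pi, math.pi) for _ in range(N)]

def polarization(angles):
    """Order parameter: 1 = perfectly aligned flock, 0 = fully disordered."""
    cx = sum(math.cos(a) for a in angles) / len(angles)
    cy = sum(math.sin(a) for a in angles) / len(angles)
    return math.hypot(cx, cy)

for _ in range(STEPS):
    new_theta = []
    for i, (xi, yi) in enumerate(pos):
        sx = sy = 0.0
        for j, (xj, yj) in enumerate(pos):
            # shortest displacement in a periodic box
            dx = (xj - xi + L / 2) % L - L / 2
            dy = (yj - yi + L / 2) % L - L / 2
            if dx * dx + dy * dy <= R * R:
                sx += math.cos(theta[j]); sy += math.sin(theta[j])
        new_theta.append(math.atan2(sy, sx) + random.uniform(-NOISE, NOISE))
    theta = new_theta
    pos = [((x + SPEED * math.cos(a)) % L, (y + SPEED * math.sin(a)) % L)
           for (x, y), a in zip(pos, theta)]

print(round(polarization(theta), 2))  # high alignment from local rules only
```

At this low noise level the flock typically ends up strongly polarized, which is the basic sense in which flocking is emergent: no bird knows the global heading.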
[1] this is available as a paper and talk presented at Understanding Complex Systems, 2006 (Champaign-Urbana, IL).
UPDATE (12/10/2013): the paper has been accepted and is now available on the bioRxiv.
Alicea, B. Filling up the tree: considering the self-organization of avian roosting behavior. bioRxiv, doi:10.1101/000349.
January 10, 2010
Dynamics Days Wednesday, Thursday overview
Sorry for the late post; I wasn't able to get around to writing it until today. The second half of Dynamics Days was almost as good as the first. The first talk of Wednesday was by Steve Strogatz, who introduced us to a phenomenon called chimera spiral waves. The interesting thing about these waves is that they are spiral-like with discrete color bands in the outer part of the spiral, but disordered and pixellated in the inner part. There is an arXiv paper on the topic if you are interested in learning more.
Other topics covered were computational neuroscience, chaos in Boolean networks, and complex networks. There was a presentation on so-called "neural avalanches", where discharges of electrical activity in the brain exhibit power-law behavior in time. Some of the complex networks speakers focused on an idea called "community structure", where the goal is to find the structure within and between groups in a network. A related topic of interest was finding the "giant component" in networks, first proposed by Erdos and Renyi.
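The giant component is easy to see computationally. Below is a small sketch (my own, using union-find) that samples an Erdos-Renyi random graph G(n, p) with p = c/n for c > 1, the regime where a component containing a constant fraction of the nodes emerges:

```python
import random

# Sketch: the "giant component" of an Erdos-Renyi random graph G(n, p),
# found with union-find. With p = c/n and c > 1, a single component
# spanning a constant fraction of the nodes appears.

random.seed(42)
n, c = 500, 3.0
p = c / n

parent = list(range(n))

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(a, b):
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb

# sample each possible edge independently with probability p
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < p:
            union(i, j)

sizes = {}
for v in range(n):
    r = find(v)
    sizes[r] = sizes.get(r, 0) + 1

giant = max(sizes.values())
print(giant / n)  # fraction of nodes in the largest component
```

For c = 3 the theoretical giant-component fraction is roughly 0.94, and the sample comes out close to that; below c = 1, all components stay tiny.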
Overall, an excellent conference. The next Dynamics Days (2011) is supposed to be at Duke University and should be quite interesting.
January 8, 2010
Eureqa!
Hod Lipson's group at Cornell has recently released a new data mining tool based on last year's paper "Distilling Free-Form Natural Laws from Experimental Data". The package is called Eureqa, and can be downloaded from here. More later.....
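The core idea behind Eureqa is symbolic regression: searching a space of candidate formulas for one that fits the data. A toy version of that idea (Eureqa itself evolves expressions genetically; this fixed, hand-written candidate pool is purely for illustration):

```python
import math

# Toy version of the symbolic-regression idea behind Eureqa: score a fixed
# pool of candidate expressions against data and keep the best. (Eureqa
# itself evolves expressions; this hand-written pool is just illustrative.)

# synthetic "experimental" data generated from the hidden law y = x**2 + 1
data = [(x / 10.0, (x / 10.0) ** 2 + 1) for x in range(-20, 21)]

candidates = {
    "x**2 + 1": lambda x: x ** 2 + 1,
    "2*x + 1":  lambda x: 2 * x + 1,
    "sin(x)":   lambda x: math.sin(x),
    "x**3":     lambda x: x ** 3,
}

def mse(f):
    """Mean squared error of a candidate formula over the data."""
    return sum((f(x) - y) ** 2 for x, y in data) / len(data)

best = min(candidates, key=lambda name: mse(candidates[name]))
print(best)  # → x**2 + 1
```

The candidate matching the hidden law wins with zero error; the hard part, which the Lipson group's paper addresses, is generating and pruning the candidate expressions automatically.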
January 5, 2010
Dynamics Days, Day 2 afternoon
My afternoon was taken up by poster preparations, although I did get to attend the second set of talks. Of these, two students of Adilson Motter delivered very interesting presentations.
One of these was on metabolic networks, and how the concept of synthetic lethality can be used to understand adaptive responses to physiological challenges. In this case, a challenge such as heat shock induced the mass upregulation of genes in a generalized response. However, this response is sub-optimal in the sense that it inhibits growth. The speaker (Sean Cornelius) ran simulations that "shut off" this stress response and subsequently maintained growth. Whether or not the same result would be seen in natural systems is an open question.
The sub-optimality question is one I have raised in an arXiv paper with regard to human performance. That is, challenges such as exercise or high stress environments might trigger responses that immediately lead to sub-optimal physiological indicators, but in the long run produce highly optimized performance indicators. This question has also been raised more generally about fitness landscapes in the evolutionary biology literature.
My poster seemed to go over well, and will be on display until tomorrow evening. Until then......
Dynamics Days, Day 2 Morning
Day 2 of Dynamics Days is halfway over. A couple of interesting (and biologically relevant) talks in the morning sessions:
Jonathan Widom (Northwestern) delivered what was probably the best talk of the morning, on nucleosome positioning. Apparently, nucleosomes use their own code (embedded in the genome) to find stretches of double helix to bind to. Even better, it seems as though transcription factors and nucleosomes compete for stretches of DNA to bind, with the former binding evolutionarily-conserved promoters and the latter having a preference for promoters that no longer have a function.
Chris Fall (UIC) delivered a really interesting talk on mitochondrial modulation of calcium channels, a function normally ascribed to the endoplasmic reticulum. The mitochondrion is becoming increasingly acknowledged as a complex system, and Chris' talk took that trend a couple of additional steps forward.
Both the Widom and Fall talks hinted that the mechanisms behind their respective topics were cell-type specific. Being a cellular reprogrammer, this piqued my curiosity somewhat. Perhaps these are things to look at in cells that have been reprogrammed to pluripotency.
Two other talks of note dealt with the collective behavior of cell populations. Actually, the last talk of yesterday (by Raymond Goldstein from Cambridge) also dealt with this topic. Harry Swinney told us about growth inhibition in bacteria, and Wolfgang Losert had some intriguing things to say about cell motility (in particular some video of wound healing and migration in skin cells).
January 4, 2010
Dynamics Days, Day 1 Morning
The first few talks of Dynamics Days were quite good. My favorite was from Fred MacIntosh (who I spoke with last night at the reception) on cytoskeletal networks. He's working on active cytoskeletal networks, which are stiffened and moved around in the cell by actin/myosin motors. He mentioned an application of this to the production of motion in biomimetic devices, which is something that got me really excited (and started the wheels in my head turning). More later.....
January 3, 2010
Dynamics Days, Day 0
I just arrived in Evanston, IL (home of Northwestern University) to attend Dynamics Days 2010. It's being held at the Hilton Garden Inn, and sponsored by NICO. So far, it promises to be an excellent assortment of talks and posters. More as the conference progresses.
January 1, 2010
breve - Artificial Life software
Here's a link to a software package I've been working with recently:
breve
breve allows you to evolve animal bodies in simulated environments. The bodies are created out of polygons, and fitness is calculated by the animal's ability to move. More later.....
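The evolutionary loop driving this kind of experiment can be sketched in a few lines. Everything below is a toy of my own devising (the "gait" fitness function especially); breve's actual physics and body representations are far richer, but the mutate-evaluate-select cycle is the same.

```python
import random

# A minimal sketch of the evolutionary loop used in breve-style experiments:
# a genome parameterizes a "gait", fitness rewards movement, and selection
# plus mutation improves the population. The fitness function is invented
# for illustration; breve evaluates real simulated bodies.

random.seed(0)
GENES, POP, GENERATIONS = 4, 20, 30

def fitness(genome):
    """Distance covered by a toy walker: each gene is a stride contribution,
    penalized when strides are wildly unequal (an 'unstable gait')."""
    stride = sum(genome)
    instability = max(genome) - min(genome)
    return stride - 2.0 * instability

def mutate(genome):
    return [g + random.gauss(0, 0.1) for g in genome]

pop = [[random.uniform(0, 1) for _ in range(GENES)] for _ in range(POP)]
best_history = []
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    best_history.append(fitness(pop[0]))
    # elitism: keep the top quarter unchanged, refill with mutated copies
    survivors = pop[: POP // 4]
    pop = survivors + [mutate(random.choice(survivors))
                       for _ in range(POP - len(survivors))]

print(round(best_history[-1], 2))  # best fitness after the final generation
```

Because the top individuals survive unchanged (elitism), the best fitness never decreases across generations, which is the minimal guarantee this kind of evolutionary search provides.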