Showing posts with label mechatronics. Show all posts

June 15, 2016

Your Strandbeests Want to Engage in Sodaplay

The strandbeest Animaris Ordis in a non-native environment (a visit to MIT).

Several years ago [1], I discovered the wonder that is Theo Jansen's Strandbeests (beach beasts in Dutch). Strandbeests are mechatronic creatures partially designed using evolutionary algorithms and built to roam the sands (or at least to be demonstrated at the beach). Strandbeests mimic the movement patterns of biological animals, despite having only approximations of limbs and joints. Some of these creatures even have a "stomach" without conventional animal muscles [2].

The wing- and bottle-propelled "stomach" of Animaris percipiere. COURTESY: Strandbeest.com.

A Strandbeest forelimb with each segment in its optimized proportions [3]. Jansen calls these "magic numbers", but in biological terms they more closely resemble allometric scaling.
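Jansen's "magic numbers" are widely reproduced as the thirteen link lengths of the leg mechanism. A quick sketch (treating the commonly quoted values as illustrative rather than authoritative) expresses each segment as a fixed multiple of the driving crank, which is the sense in which the proportions resemble allometric scaling constants:

```python
# Theo Jansen's "magic numbers" as commonly reproduced online -- treat the
# exact values as illustrative. Each letter is one link length of the leg
# mechanism (arbitrary units).
MAGIC = {"a": 38.0, "b": 41.5, "c": 39.3, "d": 40.1, "e": 55.8, "f": 39.4,
         "g": 36.7, "h": 65.7, "i": 49.0, "j": 50.0, "k": 61.9, "l": 7.8,
         "m": 15.0}

# Express each segment as a fixed proportion of the driving crank (m).
crank = MAGIC["m"]
ratios = {name: round(length / crank, 2) for name, length in MAGIC.items()}
print(ratios)
```

Whatever the exact values, it is the fixed ratios between segments, not the absolute lengths, that make a leg walk.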

While there is great artistic (kinetic sculpture) and scientific (biomechanical) value to the Strandbeest, it can also teach us a great deal about the ability of point masses to approximate biological movement. The Strandbeests are reminiscent of another model of movement, this one entirely digital. This model, Sodaplay, is a classic internet-based application first developed around the year 2000. Its Sodaconstructor allowed people to build animated creatures based on point physics and an approximation of muscle activity (via central pattern generation).

Simulated strandbeest on the move. COURTESY: YouTube user petabyte99.

In the sodaplay model, a mass-spring system is used to provide structure to the phenotype. Springs (connectors) are used to approximate muscles and connect point masses, which provide inertial responses to gravity and motion [4]. These connectors can be modulated as desired, going beyond the default sinusoidal response. In general, a networked mass-spring model can be used to examine the geometric effects of a phenotypic configuration. Depending on how the points are arranged, certain ranges of motion are possible. In the case of sodaplay, certain configurations can also lead to certain death (or collapse of the model due to gravitational conditions in the virtual environment).
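The mass-spring scheme is easy to sketch in code. Below is a minimal, illustrative point-physics model (not Sodaconstructor's actual engine): unit point masses under gravity, joined by a spring whose rest length oscillates sinusoidally like a sodaplay "muscle". All constants are invented for demonstration.

```python
import math

# Minimal sodaplay-style sketch: unit point masses under gravity, joined by
# springs whose rest lengths oscillate sinusoidally like sodaplay "muscles".

GRAVITY, DT, DAMPING = 9.8, 0.01, 0.98

class Mass:
    def __init__(self, x, y):
        self.x, self.y, self.vx, self.vy = x, y, 0.0, 0.0

class Spring:
    def __init__(self, a, b, rest, amplitude=0.0, k=50.0):
        self.a, self.b = a, b
        self.rest, self.amplitude, self.k = rest, amplitude, k

    def force_on_a(self, t):
        # rest length modulated like a sodaplay muscle
        target = self.rest + self.amplitude * math.sin(t)
        dx, dy = self.b.x - self.a.x, self.b.y - self.a.y
        length = math.hypot(dx, dy) or 1e-9
        f = self.k * (length - target)        # positive pulls a toward b
        return f * dx / length, f * dy / length

def step(masses, springs, t):
    forces = {id(m): [0.0, -GRAVITY] for m in masses}  # unit mass: F == a
    for s in springs:
        fx, fy = s.force_on_a(t)
        forces[id(s.a)][0] += fx; forces[id(s.a)][1] += fy
        forces[id(s.b)][0] -= fx; forces[id(s.b)][1] -= fy
    for m in masses:
        fx, fy = forces[id(m)]
        m.vx = (m.vx + fx * DT) * DAMPING     # damped explicit Euler
        m.vy = (m.vy + fy * DT) * DAMPING
        m.x += m.vx * DT
        m.y += m.vy * DT

# a single oscillating strut: two masses and one "muscle" spring
a, b = Mass(0.0, 0.0), Mass(1.0, 0.0)
muscle = [Spring(a, b, rest=1.0, amplitude=0.3)]
for i in range(200):
    step([a, b], muscle, i * DT)
```

With no ground contact the strut simply falls while its length oscillates; adding a floor constraint and several such struts in a closed mesh is essentially what a sodaplay creature is.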

An example of the Sodaconstructor (seemingly now defunct). Sodaplay models (for example, Daintywalker) are reliant upon human expertise and perceptual selection [5] rather than natural selection. Nevertheless, this form of constructivist selection has resulted in nearly boundless innovation, and Sodarace allows humans to innovate against a genetic algorithm.

An approximation of quadrupedal gait in Strandbeests by tracing joint and end-effector movement. COURTESY: [6].


UPDATE (6/15):
A regular reader of this blog (Dr. Richard Gordon) provided an insight that the blog's commenting system was not able to post: "It seems to me that Strandbeasts and tensegrity structures are special cases of a broader class of objects, which may be instantiated by cytoskeleton and its motor and attachment proteins".

Indeed, there are some interesting linkages between biomechanical systems and tensegrity structures that have yet to be explored. In the case of Strandbeests, Theo Jansen has actually hit upon very different (but equally functional) biomechanical systems for "limb movement" and "stomach movement". While Strandbeests do not have biological muscle (and its associated biochemistry), nor the ability to produce isometric force, they can still produce powered movements.

As is the case with homoplastic traits (e.g. bird, bat, and insect wings), purely mechanical and biomechanical systems use identical physical principles (e.g. levers and pulleys) to produce biologically realistic movements.


NOTES:
[1] Alicea, B.   Theo Jansen, Lord of the Strandbeests. Synthetic Daisies, May 28 (2012).

[2] Revisiting this post as well: Alicea, B.   On Rats (cardiomyocytes) and Jellyfish (bodies). Synthetic Daisies blog, August 22 (2012).

[3] Thor, P.   Project 3(Strandbeest). Wikiversity, December 10 (2012).

[4] McOwan, P.W. and Burton, E.J.   Sodarace: Continuing Adventures in Artificial Life. In "Artificial Life Models in Software". M. Komosinski and A. Adamatzky, eds. Chapter 3, 61-77. Springer (2009).

[5] Ostler, E.   Sodaplay Sodaconstructor. Mathematics and Computer Education, Spring (2002).

[6] Walking Strandbeests Dynamics. Online Technical Discussion Groups, Wolfram Community.

April 30, 2016

Claude Shannon, posthumously 1100100

How do you model a centennial birthday, Dr. Shannon? COURTESY: Hackaday blog.

Claude Shannon, the so-called father of information theory, was born 100 years ago today [1]. This is a Google Doodle-worthy event, even though he died in 2001. Hence, internet rule #34' [2]: "if there exists a milestone, there's a Google Doodle for it".
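The title is less mysterious once you notice that 1100100 is simply 100 written in binary:

```python
# 100 in binary really is 1100100
assert bin(100) == "0b1100100"
print(int("1100100", 2))  # back to decimal: 100
```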

April 30, 2016 Google Doodle.

Claude was also a juggler and an inventor of mechanical toys, hence the zeros and ones being juggled in the Doodle. A few years ago I wrote a post detailing this "mechanical zoo". Not a real zoo, mind you, but a collection of mechanical wonders far removed from his information theory work [3].


NOTES:
[1] Spectrum, April 27.

[2] I made up Rule #34' as a less-provocative variant of existing Rule #34.

[3] His Master's thesis and Bell Systems Technical Journal paper (pdf) were milestones in the then-emerging academic field.

December 14, 2015

Klinotactic Thoughts and Holonomic Fun

What a week for models of movement! The first item is the most recent OpenWorm Journal Club (hosted on Google Hangouts and YouTube) called "Closing the Loop from Brain Cells to Behavior". This session explored the implications of two papers by Eduardo Izquierdo and Randall Beer [1] on C. elegans neuromechanics.


This work focuses on the existence of klinotaxis in C. elegans movement generation. Klinotaxis occurs as a small but important neural circuit generates movement signals in response to the environment. Specifically, sinusoidal movement of the head occurs as a function of central pattern generation in the brain and behavioral response to the environment.
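As a toy illustration only (not the Izquierdo and Beer neuromechanical model), klinotaxis-style steering can be sketched as a sinusoidal head sweep whose turning is biased by the attractant concentration sampled on either side of the head. The source position, sampling offsets, and gains below are all invented:

```python
import math

# Toy klinotaxis sketch: the "worm" sweeps its head sinusoidally (a crude
# CPG) while sampling an attractant to the left and right of its heading;
# the difference steers it toward the source.

SOURCE = (5.0, 5.0)  # hypothetical food location

def concentration(x, y):
    # radial attractant field: higher (less negative) closer to the source
    return -math.hypot(x - SOURCE[0], y - SOURCE[1])

def simulate(steps=3000, dt=0.01, speed=1.0, turn_gain=2.0):
    x, y, heading = 0.0, 0.0, 0.0
    for i in range(steps):
        # sample slightly to the left and right of the direction of travel
        left = concentration(x + 0.1 * math.cos(heading + 0.5),
                             y + 0.1 * math.sin(heading + 0.5))
        right = concentration(x + 0.1 * math.cos(heading - 0.5),
                              y + 0.1 * math.sin(heading - 0.5))
        # sinusoidal head sweep plus gradient-driven steering bias
        heading += (0.5 * math.sin(2 * math.pi * i * dt)
                    + turn_gain * (left - right)) * dt * 10
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
    return x, y

x, y = simulate()  # the path ends up near the attractant source
```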

The second item involves the BB8 droid from the upcoming Star Wars movie. As the first spherical rolling droid of the Star Wars metaverse, BB8 is also a very real mechanical prototype called the Sphero. And now you can build your own [2]! By capitalizing on a principle called holonomic motion, the body moves independently of the head, which balances on the rolling body. The following article (How does BB8 Work?) discusses the innovation and the details behind the patent registered by Disney Labs.
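Holonomic motion just means the translational and rotational degrees of freedom can be commanded independently. A standard three-omniwheel ("kiwi") drive makes this concrete; the body radius here is an assumed, illustrative value:

```python
import math

# "Kiwi" holonomic drive sketch: three omniwheels at 120 degrees let the
# translation (vx, vy) and rotation (omega) of the base be commanded
# independently -- the property that lets a BB8-style body roll one way
# while the head's heading does something else.

WHEEL_ANGLES = [math.radians(a) for a in (0, 120, 240)]
BODY_RADIUS = 0.1  # metres, illustrative

def wheel_speeds(vx, vy, omega):
    # standard omniwheel kinematics: project body velocity onto each wheel
    return [-math.sin(a) * vx + math.cos(a) * vy + BODY_RADIUS * omega
            for a in WHEEL_ANGLES]

print(wheel_speeds(0.0, 0.0, 1.0))  # pure spin: all wheels run equally
print(wheel_speeds(1.0, 0.0, 0.0))  # pure translation: wheel speeds sum to 0
```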



NOTES:
[1] Izquierdo, E.J. and Beer, R.D. (2015).  An integrated neuromechanical model of steering in C. elegans. Proceedings of ECAL, 199-206. MIT Press  AND  Izquierdo, E.J., Williams, P. and Beer, R.D. (2015).  Information flow through the C. elegans klinotaxis circuit. PLoS One, 10(10), e0140397.

[2] For more information, please see: Berkey, R.   Make your own Star Wars VII BB8 Droid. Nerdist blog, June 7 (2015)  AND  How does BB8 Work? http://www.howbb8works.com/

September 16, 2013

I, Automaton

Here are a few robotic-themed posts from Tumbld Thoughts. The first (Mechatronoids -- Artificial Muscle-heads) gives my take on the difference between robotics and mechatronics. The second (Spock vs. Spock vs. Autonomous Control) is a face-off between three kinds of highly-logical intelligence. Scroll down the page to see who wins.

I. Mechatronoids -- Artificial Muscle-heads


What is the difference between artificially intelligent (AI) robots and mechatronics? The informal answer: while some forms of AI are trying to get into University [1], bio-inspired mechatronic devices are fighting it out in the ring. 




This video of OtherLab's bopem popem robots at this year's Google I/O conference is a nice example of pneubotics (soft robots [2] controlled by air pumps) in action. For more information, see this Synthetic Daisies feature on the OtherLab from December 2011 [3].


II. Spock vs. Spock vs. Autonomous Control


Here are clips from a rather lengthy Audi advertisement featuring a comical duel of Spock vs. Spock. Playing 3-D chess on their iPads is only the beginning. Featuring a cameo by the self-driving car from the Dynamic Design Lab at Stanford.



For more, check out the Audi Spock Challenge on YouTube. And speaking of autonomous machines, check out the DARPA Robotic Challenge, featuring RoboSimian from NASA's Jet Propulsion Lab (JPL).



NOTES: 

[1] Strickland, E.   Can an AI get into the University of Tokyo? IEEE Spectrum, August 21 (2013).

[2] For more information on soft robots, please see the Popsci soft robots tag.

[3] Alicea, B.   Tour of the OtherLab. Synthetic Daisies blog, December 1 (2011).

August 20, 2013

Fear and Loathing in Robotistan

Do you fear your (future) robot overlords? In a recent Mashable op-ed [1], John Havens argued that we should fear the future of artificial intelligence, if only for its propensity to get things wrong and our propensity to put too much trust in the machine's output. Another pair of emerging themes in popular culture, fear of the coming singularity [2] and fear of the deleterious impact robots will have on job growth [3], is something I will call robo-utopianism and robo-angst, respectively.

Ken Jennings. One man who welcomes our new robotic overlords.

Is robo-angst a general fear of the unknown? Or is it a justified response to an emerging threat? I would argue that it is mostly the former. In a previous Synthetic Daisies post critiquing futurism, I postulated that predicting the future involves both unbridled optimism and potential catastrophe. While some of this uncertainty can be overcome by considering the historical contingencies involved, the mere existence of unknowns (particularly if they involve intangibles) drives angsty and utopian impulses alike.

Both of these impulses are also based on the nature of modern robotic technology. Perhaps due to our desire to cheaply replicate a docile labor force, robots represent intelligent behavior that is ultra-logical, but not particularly human [4]. Perhaps the other aspects of human intelligence are hard to reproduce, or perhaps there is indeed something else at work here. Nevertheless, this constraint can be seen in the nature of tests for sentience, such as the Captcha (Turing test-like pattern recognition in context), used to distinguish humans from spambots.

Examples of Captcha technology. COURTESY: captcha.net

So how do we go about achieving sentience? As robo-utopians would have it, this is the next logical step in artificial intelligence research, requiring only natural increases in the current technology platform given time. Does becoming sentient involve massive increases in the ultra-logical paradigm, massive increases in embedded context, or the development of an artificial theory of mind? And if making robots more human requires something else, do we even need to mimic human intelligence?

Perhaps part of the answer is that robots (physical and virtual) need to understand humans well enough to understand their questions. A recent piece by Gary Marcus in the New Yorker [5] posits that modern search and "knowledge" engines (e.g. Wolfram|Alpha) can do no better than chance (e.g. robo-stupidity) for truly deep, multilayered questions that involve contextual knowledge. 

When robots do things well, it usually involves the aspects of human cognition and performance that we understand fairly well, such as logical analysis and pattern recognition. Many of the current techniques in machine learning and data mining are derived from topics that have been studied for decades. But what about the activities humans engage in that are not logical?

Example of the biological absurdity test.

One example of adding to the ultra-logical framework comes from social robotics and the simulation of emotional intelligence [6]. But animals exhibit individual cognition, social cognition, and something else which cannot be replicated simply by adding parallel processing, emotional reflexivity, or "good enough" heuristics. What's more, the "logical/systematic" and "irrational/creative" aspects of human behavior are not independent. For better or worse, the right-brained, left-brained dichotomy is a myth. For robots to be feared (or not to be feared), they must be like us (e.g. assimilated).

Examples of machine absurdity. TOP: an absurd conclusion from a collection of facts, BOTTOM: deep irony and unexpected results, courtesy of a recommender system.

Perhaps what is missing are shared cultural patterns among a group of robots, or "cultural" behaviors that are nonsense from a purely logical and/or traditional evolutionary perspective. Examples include: the use of rhetoric and folklore to convey information, the subjective classification of the environment, and conceptual and axiomatic blends [7].

How do you incorporate new information into an old framework? For humans, it may or may not be easy. If it falls within the prevailing conceptual framework, it is something humans AND robots can do fairly well. However, when the idea (or exemplar, in the case of artificial intelligence) falls outside the prevailing conceptual framework, we face what I call the oddball cultural behavior problem.

Take ideas that lie outside the sphere of the prevailing conceptual model (e.g. spherical earth vs. flat earth, infection vs. pre-germ theory medicine) as an example. These ideas could be viewed as revolutionary findings, ideas at odds with the status quo, or as crackpot musings [8]. The chosen point-of-view is informed either by naive theory (e.g. conceptual and axiomatic blends) or pure logical deduction. Regardless of which is used, when empirical observations in a given area are scarce, arguments become less tied to formal models, and wild stories may predominate. This may explain why artificial intelligence sometimes makes nonsensical predictions, or why humans sometimes embrace seemingly nonsensical ideas.

Incorporating new information into an old framework, a.k.a. the oddball cultural behavior problem. When the idea falls well outside of the existing framework, how is it acted upon?

In some cases, conceptual blends (or short-cuts) [9] are used to classify oddball cultural behavior and integrate the new information. This is similar to, but distinct from, how heuristics are used in decision-making. In this case, cultural change (or change in larger contexts/structures) is regulated by these short-cuts in a combinatorial manner. One might use a short-cut (more flexible than changing a finite number of rules) to respond to the immediate needs of the environment, but because it is not an exact response, the cultural system overshoots the optimal response, thus requiring additional short-cuts.

Moving on from what robots don't do well, some of the robo-angst is directed towards the integration of people and machines (or computation). The discussion in Havens' op-ed about Steve Mann might be understood as radically-transparent ubiquitous computing [10]. Steve Mann's experience is intriguing for the same reasons that human culture is a selectively-transparent ubiquitous framework for human cognition and survival. The real breakthroughs in autonomous intelligence in the future might only be made by incorporating radically-transparent ubiquitous computing into the design of such agents.

When tasks require intersubjective context, it is worth asking the question: which is funnier to the professional clown? A robotic comedian? Perhaps, but he's not quite skilled in the art. COURTESY: New Scientist and Dilbert comic strip.

Why would we want a robot that makes rhetorical slogans [11]? Or a robot that uses ritual to relate with other robots? Or a denialist [12] bot? Before the concurrent rise of big data, social media, and machine learning, the answer might be: we don't. After all, a major advantage of robots is to create autonomous agents that do not exhibit human foibles. Why would we want to screw that up?

However, it is worth considering that these same expert systems have uncovered a lot of aggregate human behavior that both violates our intuition [13] and is not something to be proud of. These behaviors (such as purchasing patterns or dishonesty) may not be optimal, yet they are the product of intelligent behavior all the same [14]. If we want to understand what it means to be human, then we must build robots that engage in this side of the equation. Then perhaps we may see the confluence of robo-angst and robo-utopia on the other side of the uncanny valley.

NOTES: 

[1] Havens, J.   You should be afraid of Artificial Intelligence. Mashable news aggregator, August 3 (2013).

[2] Barrat, J.   Our Final Invention: Artificial Intelligence and the End of the Human Era. Thomas Dunne Books (2013).

[3] Drum, K.   Welcome, robot overlords. Please don't fire us? Mother Jones Magazine, May/June (2013) AND Coppola, F.   The Wastefulness of Automation. Pieria magazine, July 13 (2013).

For a fun take on this, see: Morgan R.   The (Robot) Creative Class. New York Magazine, June 9 (2013).

[4] Galef, J.   The Straw Vulcan: Hollywood's illogical approach to logical decision-making. Measure of Doubt Blog, November 26 (2011).

[5] Marcus, G.   Why can't my computer understand me? New Yorker Magazine, August 16 (2013).

For a take on recommender systems and other intelligent agents gone bad (e.g. the annoying valley hypothesis), please see: Moyer, B.   The Annoying Valley. EE Journal, November 17 (2011).

[6] Dautenhahn, K., Bond, A.H., Canamero, L., Edmonds, B.   Socially Intelligent Agents. Kluwer (2002).

[7] Fauconnier, G. and Turner, M.   The Way We Think: Conceptual Blending And The Mind's Hidden Complexities. Basic Books (2013) AND Sweetser, E.   Blended spaces and performativity. Cognitive Linguistics, 11(3-4), 305-334 (2000).

[8] For an example of oddball and potentially crackpot ideas in science, please see: Wertheim, M.   Physics on the Fringe: Smoke Rings, Circlons, and Alternative Theories of Everything. Walker & Company (2011) AND Horgan, J.   In Physics, telling cranks from experts ain't easy. Cross-Check blog, December 11 (2011).


[9] Edgerton, R.B.   Rules, Exceptions, and Social Order. University of California Press, Berkeley (1985).

[10] For an interesting take on Steve Mann's approach to Augmented Reality and its social implications, please see: Alicea, B.   Steve Mann, misunderstood. Synthetic Daisies blog, July 18 (2012).

[11] Denton, R.E.   The rhetorical functions of slogans: Classifications and characteristics. Communication Quarterly, 28(2), 10-18 (1980).


[13] For an accessible review, please see the following feature and book: 

Lohr, S.   Sizing up Big Data, Broadening Beyond the Internet. Big Data 2013 feature, New York Times Bits blog, June 19 (2013).

Mayer-Schonberger, V. and Cukier, K.   Big Data: A Revolution That Will Transform How We Live, Work, and Think. Houghton-Mifflin (2013).

[14] Similar types of behaviors (e.g. the Machiavellian Intelligence hypothesis) can be seen in non-human animal species. For classic examples from monkeys, please see: Byrne, R.W. and Whiten, A.   Machiavellian Intelligence: Social Expertise and the Evolution of Intellect in Monkeys, Apes, and Humans. Oxford University Press (1989). 

July 20, 2013

Human Augmentation Short Course -- Part III

The next two #human-augmentation flash lectures from my micro-blog, Tumbld Thoughts, will feature several potential implementations of Intelligence Augmentation (IA), Augmented Cognition (AugCog), and their integration with smart devices. This includes two topical areas: I (Bio-machine Symbiosis and Allostasis), and II (Augmentation of Touch).

I. Bio-machine Symbiosis and Allostasis



The book "The Symbiotic Man" by Joel de Rosnay can be used to frame a graphical discussion on bio-machine symbiosis (e.g. human-smart home interaction) and the concept of mixed allostatic networks. In this case, the symbiotic relationship is between a biological system and a technical one. While there are fundamentally different dynamics between these two types of systems, the fusion of their interactions is not only possible but essential.


As discussed in previous slides, measurements from a human can be used to provide intelligence to the house (in this case, scheduling and other use information). A mitigation strategy can be used to extract information from the collected data and provide instructions for machine learning.

Measurements of the human can be taken of physiological state (e.g. measurements of brain activity or state monitoring of other organs). This can be done using microelectronics, and the measurements must cross a semi-permeable boundary which is selective with respect to available information. Nevertheless, this network allows us to construct a consensus approximation of the body's homeostatic control mechanisms.


This allows us to construct mixed allostatic networks. A mixed allostatic network includes elements from both the house (e.g. appliances) and the human body (e.g. organs). This has already been done by integrating body area networks and domotic networks.

The key innovation here is to unite the function of both networks under global, allostatic control. When the allostatic load of this network becomes too great, this information can be used to modify the mitigation strategy. This may be done in a manner similar to de Rosnay's Symbionomic Laws of Equilibrium.
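As a purely hypothetical sketch (my construction, not an existing system), the mixed-network idea reduces to a shared load signal: organs and appliances each report a deviation from their set point, the deviations sum to a global allostatic load, and crossing a threshold is the cue to revise the mitigation strategy. All names and numbers below are invented:

```python
# Hypothetical mixed allostatic network: body elements and house elements
# contribute deviations to one global load; exceeding a threshold signals
# that the mitigation strategy should be revised.

LOAD_THRESHOLD = 1.0  # assumed units

def allostatic_load(readings):
    # total absolute deviation of every element from its set point
    return sum(abs(dev) for dev in readings.values())

def revise_mitigation(readings):
    return allostatic_load(readings) > LOAD_THRESHOLD

# body elements and house elements in one network (illustrative values)
network = {"heart_rate": 0.2, "skin_temp": 0.1, "hvac": 0.5, "lighting": 0.4}
print(round(allostatic_load(network), 2), revise_mitigation(network))
```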



II. Applications related to the Augmentation of Touch

In this installment of the #human-augmentation tag, we will discuss an assortment of applications that have the potential to augment the sense of touch and upper body mobility. 


The first technology was recently featured in IEEE Spectrum's startup spotlight. The Italian startup Prensilia [1] is working on a robotic hand called Azzurra. The fully artificial hand mimics human grip by using underactuated movements. Inside the hand, the rotary motion generated by a motor is translated to linear actuation to produce biological (e.g. muscle-generated) types of motion.
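The rotary-to-linear translation can be pictured as a lead screw pulling a tendon. The lead and efficiency below are assumed, illustrative values, not Prensilia's actual specifications:

```python
import math

# Lead-screw sketch of rotary-to-linear translation for a tendon-driven
# finger. LEAD and efficiency are assumed, illustrative values.

LEAD = 0.002  # metres of linear travel per motor revolution (assumed)

def linear_travel(revolutions):
    return revolutions * LEAD

def tendon_force(motor_torque, efficiency=0.4):
    # standard lead-screw relation: F = 2 * pi * T * eta / lead
    return 2 * math.pi * motor_torque * efficiency / LEAD

print(linear_travel(10))          # ~0.02 m of tendon pull for ten turns
print(round(tendon_force(0.05)))  # newtons from a small 0.05 N*m motor
```

The small lead is what trades a fast, weak motor rotation for the slow, strong pull a gripping finger needs.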

The second technology features the DARPA initiative to create better prosthetic arms. In this video from IEEE Spectrum, the work of Dean Kamen and his group at DEKA Research is profiled. This type of prosthetic arm uses bioelectric signals from chest muscles in combination with servo motors to enable both fine motor and ballistic movements.


The third technology is simulated touch, which unlike the last two does not explicitly involve artifacts. Touch is a physical phenomenon, as contemplated in this Minute Physics video. However, touch also involves human perception, as discussed previously on Tumbld Thoughts. A thorough understanding of this sense allows us to build better ways to interact with virtual environments and robots using touch [2].

COURTESY: Chapter 4 from [2b].

NOTES:

[1] Cipriani, C.   Startup Spotlight: Prensilia developing robot hands for research, prosthetics. IEEE Spectrum, July 18 (2013).

[2] The second image from bottom is a LilyPad Arduino project. For more information on the engineering of touch, please see these two books:

a) McLaughlin, M.L., Hespanha, J.P., and Sukhatme, G.S.   Touch in Virtual Environments: haptics and the design of interactive systems. Prentice-Hall, Upper Saddle River, NJ (2002).

b) Bicchi, A., Buss, M., Ernst, M.O., and Peer, A.   The Sense of Touch and its Rendering. Springer, Berlin (2008).

September 19, 2012

Claude Shannon's Mechanical Zoo


Portrait of Claude Elwood Shannon, courtesy of Fan Chung Graham.

It's time for some fun courtesy of Claude Shannon, the legend of Bell Labs. The father of information theory was also an inventor of great renown. These inventions did not lead to useful, everyday products, but did involve some incredible engineering. I ran across some classic YouTube footage of two of these, which is always fun to watch.


Above is a picture of his Juggling Diorama, a mechanical automaton built for the simulation of juggling [1]. It is a classic simulation in the sense that a reduced physical model of the process can help clarify the algorithmic and mathematical structure of juggling as a mental and physical process. To this end, Shannon also worked out a Juggling Theorem (an application of combinatorics), which is apparently reducible to a T-shirt slogan.
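The Juggling Theorem itself does fit on a T-shirt. With F a ball's flight time, D its dwell time in a hand, V the time a hand sits vacant, H the number of hands, and N the number of balls, Shannon showed that (F + D)H = (V + D)N. A few lines verify the bookkeeping with made-up timings:

```python
# Shannon's Juggling Theorem: (F + D) * H = (V + D) * N, where F is a ball's
# flight time, D its dwell time in a hand, V the time a hand sits vacant,
# H the number of hands, and N the number of balls.

def vacancy(F, D, H, N):
    # solve the theorem for V
    return (F + D) * H / N - D

# a three-ball, two-hand cascade with illustrative timings (seconds)
F, D, H, N = 0.5, 0.25, 2, 3
V = vacancy(F, D, H, N)
assert abs((F + D) * H - (V + D) * N) < 1e-9  # the theorem balances
print(V)
```

The theorem is just conservation of time: every ball's cycle and every hand's cycle must account for the same total.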

Shannon also designed small autonomous mechatronic rodents, pioneering the design of such devices years before similar research- and consumer-oriented applications came to fruition. It is also a very early attempt at machine learning, based on cybernetic principles in fashion at the time.


The picture above is an example of a mechanical rodent he designed being put into a maze (a contraption called "Theseus"), some 40 years before the Khepera robot came on the scene [2]. For a comprehensive tour of past and present robots, mechatronic devices, and mechanical automata, visit cyberneticzoo.com [3].


NOTES:
[1] more on the science of juggling can be found here and here, along with an overview of a German museum exhibit from a few years back called Codes and Clowns.

[2] there are more stories about Shannon's mechanical rats roaming the halls of Bell Labs in the recently published book "The Idea Factory".


[3] quite an impressive overview of some really obscure technology. My favorite is the mechanical animal compendium.

May 28, 2012

Theo Jansen, Lord of the Strandbeests


I recently discovered the work of kinetic sculptor Theo Jansen [1], who has a rather unconventional take on artificial life. His version of artificial life does not involve computer simulations, but rather involves building physical analogues of animal morphologies (he calls them "Strandbeests", as the skeleton is not made of bones, but plastic tubes).

Figure 1. Theo Jansen's website: http://www.strandbeest.com/

It isn't just animal-inspired structural engineering -- these things can actually move! More accurately, they are not yet self-actuating, as a human must provide the muscle power (for now). Check out a feature on the Makezine Blog (link) and video of a moving "Rhinoceros" (link) for more information [3].


Figure 2. Static Image of Theo Jansen's Rhinoceros, reproduced using LEGOs [4]. COURTESY: Makezine blog.


One of his made-up species, called Animaris geneticus parvus [5], can be directly replicated (via 3-D printer) and is available from Shapeways (a 3-D schematic clearinghouse).

Figure 3. Image of Animaris geneticus parvus.

Notes:
[1] Theo Jansen's TED Profile and Wikipedia Profile.

[2] Check out a post I did last December on pneumatic actuation for more information on biomimetic actuation systems in robots and mechatronic creatures.

[3] Check out the Strandbeest Vimeo channel for more videos.

[4] Yes, LEGO is an acronym. A LEGO fanatic will go to great lengths to correct you when it is written as "Lego".

[5] I guess this could be roughly translated as "cheap replication of an animal". They are clones, but it would be interesting to see what would result from a few random mutations of the schematics (1 out of every 1000 downloads would have a segment missing, or pointing the wrong way, etc).

December 1, 2011

Tour of the OtherLab

While doing a recent search on up-and-coming technologies and cutting-edge science, I stumbled upon some YouTube videos of soft robots, and subsequently found the website of OtherLab. OtherLab is a collective of scientists and inventors involved in a number of projects, including proof-of-concept mechatronics that might be useful in building functionally adaptive and intelligent machines. In this post, I will review a number of videos from their YouTube channel.


Similar to a number of physiological pumping processes, here is a peristaltic pump design built by the lab. The process of peristalsis is the main actuation process in smooth and skeletal muscle systems. Peristalsis allows for symmetrical displacements of the muscular surface that occur in waves and move fluids through the body. The gastrointestinal tract and four-chambered heart are two examples of this. This pump design, optimized for modeling muscular output during joint flexion, is a key component of OtherLab's pneubotic machines.
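The traveling-wave character of peristalsis is easy to sketch: a constriction whose position sweeps along the tube at the wave speed. All parameters below are illustrative, not the pump's actual dimensions:

```python
import math

# Traveling-wave sketch of peristalsis: the tube's radius is pinched by a
# sinusoidal constriction whose position moves forward at the wave speed.

REST_RADIUS, AMPLITUDE = 1.0, 0.4
WAVELENGTH, WAVE_SPEED = 2.0, 0.5

def radius(x, t):
    # tube radius at position x and time t
    phase = 2 * math.pi * (x - WAVE_SPEED * t) / WAVELENGTH
    return REST_RADIUS - AMPLITUDE * (1 + math.cos(phase)) / 2

def narrowest_point(t):
    xs = [i * 0.01 for i in range(200)]  # one wavelength of tube
    return min(xs, key=lambda x: radius(x, t))

print(narrowest_point(0.0), narrowest_point(1.0))  # the pinch moves forward
```

Fluid ahead of the moving pinch is pushed along, which is all a peristaltic pump (or a gut) needs.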


This video shows the strategy OtherLab uses to design their soft robots and other creations. To fabricate the motors, robots, and other mechanical things OtherLab specializes in, they use a CNC (computer numerical control) machining technique called "nesting". I have discussed rapid prototyping in earlier posts, particularly as it relates to building physical models. In fabrication programming (as opposed to computational programming), nesting refers to the laying out of multiple parts, some entirely contained within others, on a single surface. This allows for dynamic nesting, which manages many part sizes and shapes simultaneously. A complex layer is then created, which is subsequently cut and incorporated into a technological design.
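Real dynamic nesting software handles arbitrary shapes and parts contained inside other parts; as a hedged toy stand-in (not OtherLab's software), a greedy "shelf" packer for rectangles conveys the layout idea:

```python
# Greedy "shelf" packing of rectangles onto one sheet: a toy stand-in for
# dynamic nesting. Parts are placed tallest-first in rows ("shelves"); when
# a row is full, a new shelf opens above it.

def shelf_pack(parts, sheet_width):
    """parts: list of (w, h); returns list of (x, y, w, h) placements."""
    placements, x, y, shelf_height = [], 0.0, 0.0, 0.0
    for w, h in sorted(parts, key=lambda p: -p[1]):  # tallest first
        if x + w > sheet_width:        # row is full: open a new shelf
            x, y = 0.0, y + shelf_height
            shelf_height = 0.0
        placements.append((x, y, w, h))
        x += w
        shelf_height = max(shelf_height, h)
    return placements

layout = shelf_pack([(3, 2), (2, 2), (4, 1), (1, 1)], sheet_width=6)
print(layout)
```

Sorting tallest-first keeps each shelf dense; production nesting does far better by rotating parts and filling internal holes.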


Pneubot stands for "pneumatic robot", or a robot that is actuated by pneumatic technology. Pneumatic technology involves the use of compressed air to drive mechanical motion. The compressed air can be moved through soft, balloon-like tubes, which allows for both rigidity (when filled) and flexibility (when decompressed or empty). In this video, an elephant-shaped pneubot is used to demonstrate the level of motor control allowed using this technology. The OtherLab is developing this technology in concert with Manu Prakash's lab at Stanford and DARPA's Maximum Mobility and Manipulation program.


In this video, a soft-bodied "crawler" robot moves across a flat tabletop and does the limbo, demonstrating the flexibility of gait and movement achievable with soft robot bodies. This "crawler" bot is another instance of the pneubot, just scaled down considerably. The idea of a soft-bodied robot is relatively new, and can be applied to a number of medical and industrial problems.


Pneumatics are not only used to build selectively compliant skeletons for robots. They can also be used as braces and muscles, exhibiting rigidity when required. In this video, a knee brace is demonstrated that can force a human knee joint to full extension.


According to the embodiment school of thought, a robot's body and brain are dependent on each other, and interact accordingly. The soft robot examples do not come with an on-board brain. Fully autonomous control (or the robot's ability to control its own behavior) is a "holy grail" of robotics, as it allows for both remote and on-the-fly operation that does not rely on human input. In this example, the OtherLab group demonstrates autonomous control in a model aircraft, which can maintain a circular heading without external commands.
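A heading-hold loop of the kind needed to keep a circular course can be sketched as a proportional controller on turn rate; the gains and the steady wind bias below are invented for illustration, not OtherLab's controller:

```python
import math

# Sketch of "maintain a circular heading without external commands": a
# proportional loop drives the turn rate toward a constant value despite
# a steady disturbance (e.g. wind). Gains are illustrative.

TARGET_TURN_RATE = 0.2  # rad/s: a steady circle
KP, DT = 2.0, 0.05

def simulate(steps=400, wind_bias=0.05):
    turn_rate, heading = 0.0, 0.0
    for _ in range(steps):
        command = KP * (TARGET_TURN_RATE - turn_rate)  # proportional control
        turn_rate += (command - wind_bias) * DT
        heading = (heading + turn_rate * DT) % (2 * math.pi)
    return turn_rate, heading

rate, heading = simulate()
print(rate)  # settles just below the target because of the steady bias
```

The residual offset is the classic limitation of pure proportional control; an integral term would remove it.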


This is another example of autonomous control, this time in the form of actuation. This form of autonomous actuation mimics tropic behaviors observed in many plants and animals. The mirror array shown in the picture/video moves with the sun (i.e. it is heliostatic), or in this case with another light source that moves around the environment relative to the otherwise stationary array. This kind of actuation is currently used in very large solar panel arrays and solar furnaces.
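The underlying geometry of a heliostat is simple: to hold reflected light on a fixed target, the mirror normal must bisect the sun direction and the target direction. A 2-D sketch:

```python
import math

# 2-D heliostat geometry: the mirror normal bisects the sun direction and
# the target direction, so the normalized sum of the two unit vectors is
# exactly the required normal.

def unit(v):
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n)

def mirror_normal(sun_dir, target_dir):
    s, t = unit(sun_dir), unit(target_dir)
    return unit((s[0] + t[0], s[1] + t[1]))

# sun directly overhead, target on the horizon: the mirror tilts 45 degrees
n = mirror_normal((0.0, 1.0), (1.0, 0.0))
print(math.degrees(math.atan2(n[1], n[0])))  # ~45 degrees
```

A tracking heliostat just recomputes this normal as the sun (or lamp) moves and drives its two axes to match.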

November 21, 2009

Tongue-in-cheek Science

Here are a few tongue-in-cheek observations on pop culture science. The first is a timeline of stellar and human evolution as proposed in the Big Bang Theory opening theme song (sung by the masters of tongue-in-cheek, The Barenaked Ladies).


Not sure if there are creationist undertones to this song or not -- I'm sure these tendencies are a result of artistic license rather than other intent.

Timeline of our origins, as sung by The Barenaked Ladies.

Secondly, it seems that everyone is publishing a book these days. For example, I have a new book in the works.....


...or not. The important thing to remember is that it could be a book. And yes, due to the fast-moving nature of the biomechanics of fictional creatures (e.g. AT-AT) field, it's going straight to the 3rd edition.




