This blog post will focus on recent (and not so recent) attempts to quantify, control, and augment intelligent, performance-related behavior in human beings. The intersection of human intelligence and artificial intelligence by way of human performance goes by the name of Augmented Cognition. Augmented Cognition, generally regarded as a domain of Human Factors engineering, also has broad applications to human-machine systems. Relevant application domains range from automotive and transportation performance to human interactions with information technologies and bioengineered prosthetic devices.
Augmented Cognition is distinct from traditional artificial intelligence, in which a general-purpose intelligence is constructed de novo to control all aspects of intelligent behavior. Rather than machine intelligence compensating for the shortcomings of human intelligence, human intelligence compensates for the shortcomings of machine intelligence. Academic interest in this set of problems began in the 1950s [1], while contemporary approaches have included information technologies and DARPA's Augmented Cognition project. As applied to technology, this work falls into the broader category of human-assisted intelligent control.
There are two main components of augmenting human intelligence using computational means. The first is a closed-loop system, which involves a feedforward and a feedback component between the individual and a technological system that enables augmentation. This could be a heads-up display, a mobile device, or a brain-computer interface controlled by a real-time algorithm. The second is a model of human performance for a given set of cognitive and physiological functions, which determines a control policy. Examples of both are provided below, along with a consideration of open problems in this field.
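As a rough Python skeleton (all names here are my own, assumed purely for illustration), the two components separate cleanly: a performance model that turns measurements into a control policy, and a loop that carries the feedforward and feedback signals between individual and system.

```python
from typing import Callable, Protocol

class PerformanceModel(Protocol):
    """Component 2: a model of cognitive/physiological performance.
    Given a measurement, its policy decides what mitigation to apply."""
    def policy(self, measurement: float) -> float: ...

def closed_loop_step(read_sensor: Callable[[], float],
                     model: PerformanceModel,
                     actuate: Callable[[float], None]) -> None:
    """Component 1: one cycle of the closed loop."""
    measurement = read_sensor()        # feedforward: individual -> system
    mitigation = model.policy(measurement)
    actuate(mitigation)                # feedback: system -> individual

# Example wiring with stand-in parts:
class KeepNearTarget:
    def policy(self, measurement: float) -> float:
        return 0.5 * (1.0 - measurement)   # nudge toward an assumed target

closed_loop_step(lambda: 0.4, KeepNearTarget(), print)
```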
Closed-loop System Design
In an article from the 1950s [2], W.R. Ashby took a cybernetic approach to first-order (i.e. no intermediate variables) intelligence augmentation (Figures 1-4). While somewhat crude by modern standards, under which sensors provide real-time measurements of physiological state, it lays out a simple theoretical model for augmenting cognitive and neural function.
Figure 1. Highlight for the X component.
Figure 2. Highlight for the G component.
Figure 3. Highlight for the S component.
Figure 4. Highlight for the U component.
In the Ashby model, the feedforward component (G) is the intelligence of the user applied to performance captured by the device. This might be driving performance, or accuracy in moving an object. While the idea that intelligence can be distilled to a single variable is controversial, modern applications have used variables such as accuracy counts or a specific electrophysiological signal to "drive forward" the system. The amplifier (S) gathers the feedforward elements of G and operates on them in a selective manner. This can be treated as either an optimization problem [3] or an inverse problem [4], and it defines the control policy imposed on the performance data. In the Yerkes-Dodson example shown later on, a minimax-style optimization method is used. The feedback element (U) is a signal derived from the information in G that should contribute to an improvement in performance, or to subsequent measurements of G.
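To make the loop concrete, here is a minimal Python sketch of the G/S/U cycle. The noisy performance signal, the target level, and the penalty on aggressive cueing are all assumptions of mine, not part of Ashby's formulation; they simply illustrate how S can act as a selective optimizer over the feedforward signal.

```python
import random

TARGET = 1.0  # assumed target performance level

def measure_G(performance):
    """Feedforward (G): a noisy scalar measurement of user performance,
    e.g. driving accuracy, as captured by the device."""
    return performance + random.gauss(0.0, 0.05)

def amplifier_S(g, candidate_gains=(0.1, 0.5, 1.0)):
    """Amplifier (S): selectively operates on G. The control policy here
    is a toy optimization: choose the gain that minimizes predicted error
    plus a penalty on aggressive cueing."""
    def cost(k):
        predicted = g + k * (TARGET - g)
        return (TARGET - predicted) ** 2 + 0.2 * k ** 2
    return min(candidate_gains, key=cost)

def feedback_U(g, gain):
    """Feedback (U): a corrective signal derived from G, intended to
    improve the next measurement of G."""
    return gain * (TARGET - g)

performance = 0.4
for step in range(10):
    g = measure_G(performance)
    u = feedback_U(g, amplifier_S(g))
    performance += u  # the user's performance responds to the cue
    print(f"step {step}: G={g:.3f}, U={u:+.3f}")
```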
Contemporary Models from Human Performance
More contemporary models for augmenting human performance [5,6] have involved mapping closed-loop control to a physiological response function. Figures 5 through 7 show how this works in the context of the Yerkes-Dodson curve. The Yerkes-Dodson curve is an inverted U-shaped function that characterizes arousal in the context of some physiological measurement. At both low and high values of the physiological indicator, the level of arousal is low; at moderate values, the level of arousal is high. The goal of an amplifier (also called a mitigation strategy) is to maintain performance (defined as measured arousal) within the highest range of arousal values, as sketched in code after the figures below.
Figure 5. Example of a physiological response function (e.g. Yerkes-Dodson curve).
Figure 6. Example of a mitigation strategy.
Figure 7. Keeping performance within an optimal range.
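The sketch below (my own toy construction, not taken from [5] or [6]) models the Yerkes-Dodson curve as a Gaussian and applies a minimax-style mitigation: at each step it chooses the adjustment whose worst-case arousal, over an assumed uncertainty band around the physiological reading, is highest. The curve's center, width, and step sizes are illustrative assumptions.

```python
import math

def arousal(x):
    """Inverted-U (Yerkes-Dodson-style) response: arousal peaks at
    moderate values of the physiological indicator x. A Gaussian
    centered at 0.5 is an assumed stand-in for the real curve."""
    return math.exp(-((x - 0.5) ** 2) / (2 * 0.15 ** 2))

def mitigate(x_measured, noise=0.05, steps=(-0.1, 0.0, 0.1)):
    """Minimax-style mitigation: pick the adjustment that maximizes
    worst-case arousal over the uncertainty interval [x-noise, x+noise].
    For a unimodal curve, the worst case sits at an endpoint."""
    def worst_case(step):
        return min(arousal(x_measured + step - noise),
                   arousal(x_measured + step + noise))
    return max(steps, key=worst_case)

x = 0.9  # an elevated (over-aroused) physiological reading
for t in range(8):
    x += mitigate(x)
    print(f"t={t}: indicator={x:.2f}, arousal={arousal(x):.3f}")
```

Under these assumptions, the controller steps the indicator back toward the middle of the curve and then holds it there, since any further adjustment would lower the worst-case arousal.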
Two Outstanding Problems
There are two potential challenges to this control policy: a reliance on convexity, and complete measurement of a physiological state. The example shown here has relevance to arousal and attention, and has attracted interest because of its relative ease of mitigation. The development of brain-machine interfaces has likewise focused on simple-to-characterize physiological signals (such as population vector codes for movement [7] or spectral bands of an EEG [8]). However, not all physiological response functions are so simple to characterize. In cases of significant non-convexity (cases where the response function does not form smooth, convex gradients), it may be quite difficult to mitigate suboptimal behavior or physiological responses [9]. In such cases, there could be multiple locally optimal points, each with very different performance characteristics.
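A toy illustration of the convexity problem: with a two-peaked response function (the shape below is assumed for illustration), a purely local mitigation strategy settles on whichever peak is nearest, even when the other peak offers much better performance.

```python
import math

def response(x):
    """Non-convex response with two local optima: a low peak near
    x = 0.2 and a higher peak near x = 0.8 (assumed for illustration)."""
    return (0.6 * math.exp(-((x - 0.2) ** 2) / 0.005) +
            1.0 * math.exp(-((x - 0.8) ** 2) / 0.005))

def hill_climb(x, step=0.02, iters=100):
    """Naive local mitigation: nudge the indicator in whichever
    direction improves the response, one step at a time."""
    for _ in range(iters):
        here, up, down = response(x), response(x + step), response(x - step)
        if up <= here and down <= here:
            break  # local optimum: no neighboring step improves
        x = x + step if up > down else x - step
    return x

# Starting near the lower peak, local mitigation never finds the better one.
for start in (0.15, 0.6):
    x = hill_climb(start)
    print(f"start={start}: settles at x={x:.2f}, response={response(x):.2f}")
```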
The complete measurement of physiological state is another potential problem with this method. While fully characterizing a physiological or behavioral process is the most obvious difficulty, the adaptability of a physiological system to repeated mitigation is a more subtle but important problem. In some cases, the physiological response will habituate to the mitigation treatments and render them ineffective. In the case of presenting information on a heads-up display, users might simply come to ignore the presented cues over long periods of time. It might also be that encouraging rapid changes in arousal level is more effective than encouraging a fixed level of performance over time. In both strength training regimens and more general physiological responses to the environment, switching between stimuli of alternating intensities can have complex and ultimately adaptive consequences for the long-term response.
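The habituation problem can also be sketched with a toy model (the decay rate and recovery rule below are assumptions of mine, not measured dynamics): each repetition of the same cue loses effectiveness, while alternating between two cue types keeps the response fresh.

```python
def habituating_response(cue_sequence, decay=0.7):
    """Toy habituation model: a cue type's effectiveness shrinks by
    `decay` each time it repeats, and recovers fully once a different
    cue type intervenes (assumed dynamics, for illustration only)."""
    effectiveness = {}
    responses = []
    for cue in cue_sequence:
        strength = effectiveness.get(cue, 1.0)
        responses.append(strength)
        effectiveness = {cue: strength * decay}  # other cue types reset
    return responses

print("repeated cue:    ", habituating_response(["beep"] * 6))
print("alternating cues:", habituating_response(["beep", "flash"] * 3))
```

Under this assumption, repetition drives the response toward zero while alternating cues sustain full effectiveness, echoing the intuition from training regimens above.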
Incorporating intelligence augmentation into the design of a technological system is an ongoing challenge. In a future post, I will focus on why certain aspects of human and animal intelligence are fundamentally different from current approaches to machine learning and artificial intelligence, and how they can potentially aid and complement those approaches.
References:
[1] Ashby, W.R. (1952). Design for a Brain. Chapman and Hall, London.
[2] Ashby, W.R. (1956). Design for an Intelligence Amplifier. In Shannon, C.E. and McCarthy, J. (Eds.), Automata Studies. Princeton University Press, Princeton, NJ.
[3] An optimization method uses some objective criterion to select a range of values thought to either minimize or maximize system properties.
[4] An inverse problem is one where the solution is known, but the route to that solution is not.
[5] Schmorrow, D. D. & Stanney, K.M. (Eds) (2008). Augmented Cognition: A Practitioner's Guide. HFES Publications.
[6] Fuchs, S., Hale, K.S., Stanney, K.M., Juhnke, J., and Schmorrow, D.D. (2007). Enhancing Mitigation in Augmented Cognition. Journal of Cognitive Engineering and Decision Making, 1(3), 309-326.
[7] Jarosiewicz, B., Chase, S.M., Fraser, J.W., Velliste, M., Kass, R.E., and Schwartz, A.B. (2008). Functional network reorganization during learning in a brain-computer interface paradigm. PNAS, 105(49), 19486-19491.
[8] Lotte, F., Congedo, M., Lecuyer, A., Lamarche, F., and Arnaldi, B. (2007). A review of classification algorithms for EEG-based Brain-Computer Interfaces. Journal of Neural Engineering, 4, 1-24.
[9] Alicea, B. (2008). The adaptability of physiological systems optimizes performance: new directions in augmentation. arXiv:0810.4884 [cs.HC, cs.NE].