I’m not usually the one blogging on video games; that tends to be Daniel’s department. After all, he’s got three boys at home, and I live with two horse-obsessed women, so it’s a bit out of my habitual orbit. I get more interaction with tractors than video game consoles. But Daniel tossed this reference in my direction (thanks, Daniel), and I decided to write on it for a number of reasons.
According to a recent post on Physorg.com, ‘The Wiimote as an interface bridging mind and body,’ a research team led by Rick Dale at the University of Memphis has been using Nintendo’s Wiimote to study how people reach as they learn new tasks. As the Physorg story discusses, the Memphis team taught people a symbol-matching task and used the Wiimote to gauge the quality of their movements while they did it.
As people learned, their bodies reflected the confidence of that learning. Participants moved the Wiimote more quickly, more steadily, and also pressed on it more firmly as they became familiar with the symbols. While everyone knows that you get better at moving in tasks that require intricate movement (such as learning to use chopsticks), these results suggest that your body movements are related to learning other information as well.
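To make ‘steadiness’ a bit more concrete: the Wiimote reports three-axis accelerometer samples, and one plausible index of movement smoothness is the variance of acceleration magnitude over a trial, with lower variance meaning a steadier hand. This is a hypothetical sketch of that idea, not the actual measure Dale and colleagues used, and the sample data are made up:

```python
import math

def accel_magnitudes(samples):
    """Euclidean magnitude of each (x, y, z) accelerometer sample."""
    return [math.sqrt(x * x + y * y + z * z) for (x, y, z) in samples]

def smoothness(samples):
    """Variance of acceleration magnitude over a trial.

    Lower variance ~ steadier movement. A stand-in metric for
    illustration, not the measure from Dale et al. (2008).
    """
    mags = accel_magnitudes(samples)
    mean = sum(mags) / len(mags)
    return sum((m - mean) ** 2 for m in mags) / len(mags)

# Made-up data: a jittery reach vs. a steady one.
jittery = [(0.1, 0.9, 0.2), (0.8, 0.1, 0.7), (0.2, 0.6, 0.1)]
steady = [(0.30, 0.30, 0.30), (0.31, 0.29, 0.30), (0.30, 0.30, 0.31)]

assert smoothness(jittery) > smoothness(steady)
```

On a metric like this, a participant’s trials would show shrinking variance as familiarity with the symbols grew.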
In fact, I didn’t find the relationship too surprising, probably because it reminded me of Esther Thelen’s wonderful experiments with children on the ‘A-not-B’ reaching error, a task Piaget first used to show that infants up to around 10 months of age do not understand object permanence. What Thelen found in her replications of Piaget’s experiments was that the children who reached for the wrong place (after multiple trials in which the object really had been at that ‘wrong’ place) were not just having conceptual problems; they were having motor-conceptual problems. The reaching act itself, which Piaget treated as a transparent readout of the infants’ understanding (in this case, of where an object would be), was in fact part of the task the children were attempting. Reaching was part of the challenge of what they were doing, and factors having more to do with the motor side of the act, rather than with understanding alone, interfered with reaching for the ‘right’ place.
The research builds upon other studies finding that cognitive processes interact with motor activities, unfolding in time and dynamic in nature. By contrast, some researchers, like Piaget, try to treat cognition in isolation, seeing the actions that ‘express’ cognitive understandings as a sort of second stage: the consequence of cognition, but not really affecting thought. On this model, actions are just the symptom of understanding, much as language is sometimes treated as the mere expression of ideas rather than as fundamental to their constitution (an example we’ve been discussing).
In addition, the research demonstrates how ‘low timescale processes,’ that is, immediate effects, are linked to ‘longer timescale processes.’ Specifically, the experiments showed ‘covariation between action patterns unfolding in 100’s of milliseconds and learning that is taking place across many minutes.’ Bridging such timescales is one of the more profound obstacles in current cognitive science (much like the scale problem in anthropological theory between subject-level experience and large-scale social-historical processes). As Dale and colleagues write:
Whatever one’s choice of theoretical banner, this exploration of cognition and action addresses a fundamental challenge facing the cognitive sciences: to bridge the various levels of complexity relevant to human brain and behavior. In this context, an outstanding puzzle is further elaborating the systematic relation between low-level, short-timescale characteristics of movement and high-level, longer-timescale processes, such as learning.
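The logic of that cross-timescale covariation can be illustrated with a toy analysis: take a fast, millisecond-level property of each reach (say, the jerkiness metric sketched above) and correlate it with trial number across the minutes-long session. The data here are invented for illustration; only the shape of the analysis, not the numbers, reflects the paper:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up session data: one jerkiness score per trial.
trials = list(range(1, 9))          # long timescale: learning across minutes
jerkiness = [9.1, 8.4, 7.9, 6.5,    # short timescale: ms-level movement dynamics
             5.2, 4.0, 3.1, 2.6]

r = pearson(trials, jerkiness)
# A strongly negative r means movements smooth out as learning proceeds.
assert r < -0.9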
Although I like the article, I think the experimental design, especially the choice of a manually simple task, could be questioned. As Dale et al. write: ‘Thus the task does not inherently involve the motor dynamics to accomplish it – the participants are learning symbol pairs, not movements.’ First, I’m not sure that’s entirely the case. The movements may be pretty simple, and the participants may be accomplished at moving their arms, but they may nevertheless be learning how to move a computer interface in empty space. In a parallel case, moving a joystick or using a mouse is also not as easy as it seems to those accomplished with these devices; we quickly forget that we once had to learn to use them (something I rediscovered when my new Apple laptop came with a touch pad rather than a mouse or roller ball). Moreover, one of the points of the article is that subjects become more ‘confident’ in their movements; even if the manual task is simple, wouldn’t we still expect people to become more ‘confident’ while doing it?
Second, why choose this sort of task, matching symbols on a screen, if the research team wants to show the imbrication of motor and cognitive learning? Or, more specifically, why suggest that the ‘task does not inherently involve the motor dynamics’ when it seems very much to involve exactly those dynamics? This is what I find so powerful about Thelen’s research (and other research the authors also cite): even very simple ‘cognitive’ tasks are invariably more complicated than research designers might think, involving perceptual and motor facets, for example. Whatever the task, what subjects are asked to do is inherently motor-sensory-cognitive, both in its material demands and in the way the embodied brain goes about solving it. Classifying symbols clearly involves perceptual dimensions, even in recalling the target symbol, and indicating which symbol matches always requires some sort of overt act: pointing, speaking, typing, or something. That is, I find the authors a bit inconsistent in their commitment to motor-cognitive integration.
On a significantly different note, the research methods used by the team are also intriguing, and the Physorg article focuses heavily on them (and on the Wiimote). Using off-the-shelf hardware like the Wiimote promises to be one way labs might short-circuit some of the extraordinary expense and technical demands of developing new technology for experiments; the ‘brain-reading headset’ discussed by Daniel is another example. I’ve been thinking very hard about how I might use off-the-shelf technology to do sports research, especially in the field, in a controlled but naturalistic environment. That would require manageable, portable, dependable hardware and software. I’ve been wondering if I could find some way to hitch together EA Sports’ Rugby 2008 (or whatever the current incarnation is), some basic eye-tracking equipment, an interface (like the Wiimote) to track how a ball-handler would pass the ball, and then, if possible, some sort of brain-imaging capacity. The goal would be to tell how players go about what they do, whether they do it in predictable or similar ways, and whether skill is linked to some particular constellation of all these factors (or constellations, if there is more than one stable way to act skillfully). Putting this whole suite of equipment together is one of my long-term ambitions, but the rugby project is not yet at the stage of having working hypotheses to test with such equipment.
Dale, Rick, Jennifer Roche, Kristy Snyder, and Ryan McCall. 2008. Exploring Action Dynamics as an Index of Paired-Associate Learning. PLoS ONE 3(3): e1728. doi:10.1371/journal.pone.0001728 (http://www.plosone.org/doi/pone.0001728)