As Daniel discussed in January in Monkey Makes Robot Walk!, a number of researchers are working on brain-machine interfaces by attaching prostheses to monkeys. Science Daily carries a new story, Mind Over Matter: Monkey Feeds Itself Using Its Brain, about a University of Pittsburgh School of Medicine experiment in which a monkey successfully used a human-like prosthetic limb to feed itself. As the Science Daily story reports:
Using this technology, monkeys in the Schwartz lab are able to move a robotic arm to feed themselves marshmallows and chunks of fruit while their own arms are restrained. Computer software interprets signals picked up by probes the width of a human hair. The probes are inserted into neuronal pathways in the monkey’s motor cortex, a brain region where voluntary movement originates as electrical impulses. The neurons’ collective activity is then evaluated using software programmed with a mathematic algorithm and then sent to the arm, which carries out the actions the monkey intended to perform with its own limb. Movements are fluid and natural, and evidence shows that the monkeys come to regard the robotic device as part of their own bodies.
According to the team, this is the 'first' example of the 'use of cortical signals to control a multi-jointed prosthetic device for direct real-time interaction with the physical environment ("embodiment")' (from the abstract of the Nature article). I'm always dubious about such 'firsts,' especially as this team has been announcing work on this project since at least 2004, but the research is still fascinating even if it is not a 'first.'
What I especially appreciated was the discussion of how the monkey’s brain actually develops the capacity to interface with the neural sensors implanted in the brain. It’s not so much ‘mind over matter’ as it is ‘mind learning about matter,’ as Daniel’s earlier post on the monkey-robot hybrid suggested. The research team implanted probes in the motor cortex, but there was simply no way that the equipment could ‘read’ the activity of the incredibly complex motor cortex.
It takes thousands of neurons firing in concert to allow even the simplest movements, and it would be impossible to tap into all of them, so the Pitt team developed an algorithm to fill in the missing neuron signals, allowing them to get a usable signal from a manageable number of electrodes: 'The algorithm they developed to decode the cortical signals acts like a voting machine by using each cell's preferred direction as a label and taking a continuous tally of the population throughout the intended movement' (from EurekaAlert! 26-Oct-2004).
As in a lot of human neural activity, there’s a kind of ‘neural democracy,’ where it’s not so much one neuron that starts the motor ball rolling as it is a shift in the ‘neural consensus’ among a whole lot of neurons. So Schwartz’s team created sensors that could read about 100 neurons and wrote an algorithm to ‘fill in the missing information.’ In other words, the equipment acted a bit like an opinion poll, sampling the neural communication to get a feel for the consensus.
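The 'opinion poll' idea can be sketched in a few lines of code. This is a toy illustration of population-vector decoding, not the lab's actual software: the number of cells, the preferred directions, the baseline rates, and the cosine tuning are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each recorded cell "votes" with its preferred direction,
# weighted by how strongly it fires above its baseline rate.
n_cells = 100
angles = rng.uniform(0, 2 * np.pi, n_cells)          # invented preferred directions
preferred = np.column_stack([np.cos(angles), np.sin(angles)])

def decode_direction(rates, baseline):
    """Population vector: sum each cell's preferred direction,
    weighted by its firing rate relative to baseline."""
    weights = rates - baseline
    vote = weights @ preferred                       # continuous tally of the population
    norm = np.linalg.norm(vote)
    return vote / norm if norm > 0 else vote

# Simulate cells responding (with cosine tuning plus noise)
# to an intended rightward movement.
intended = np.array([1.0, 0.0])
baseline = np.full(n_cells, 10.0)                    # invented resting rate, spikes/sec
rates = baseline + 5.0 * (preferred @ intended) + rng.normal(0, 1, n_cells)

decoded = decode_direction(rates, baseline)
print(decoded)  # a unit vector pointing close to the intended direction
```

Even though each individual cell is noisy, the tally across a hundred 'voters' recovers the intended direction quite reliably, which is why a manageable number of electrodes can stand in for thousands of neurons.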
While this is fascinating, Daniel's earlier piece highlights a dimension of this prosthesis work that we at Neuroanthropology find particularly fascinating: the way that the brain keeps updating the 'body image' to take account of new information. Naturally, it does this as the body grows, changes, weakens, gets injured, and the like, but it also does so when a Pittsburgh team sticks neural sensors attached to a prosthetic arm into the whole mix: 'Because the software had to rely on a small number of the thousands of neurons needed to move the arm, the monkey did the rest of the work, learning through biofeedback how to refine the arm's movements by modifying the firing rates of the recorded neurons' (from EurekaAlert! 26-Oct-2004). That is, the brain learned about the arm and figured out how to manipulate it, not because the monkey was particularly crafty or intelligent, but because of the passive manipulation of the prosthesis by the researchers (and the monkey really wanted a marshmallow).
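That biofeedback loop can also be sketched as a toy simulation: the decoder stays fixed, and the 'brain' gradually adjusts the recorded cells' firing rates in whatever direction shrinks the gap between the decoded movement and the intended one. Again, every number here is invented, and this is only a cartoon of the learning the monkey actually does.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed decoder: same population-vector idea as before,
# with invented preferred directions for 100 recorded cells.
n_cells = 100
angles = rng.uniform(0, 2 * np.pi, n_cells)
preferred = np.column_stack([np.cos(angles), np.sin(angles)])

intended = np.array([0.0, 1.0])           # the monkey wants the arm to move up
rates = rng.normal(0, 1, n_cells)         # initially uninformative firing

for step in range(200):
    vote = rates @ preferred
    decoded = vote / (np.linalg.norm(vote) + 1e-9)
    error = intended - decoded
    # "Biofeedback": nudge each cell's rate in the direction that
    # reduces the visible error between decoded and intended movement.
    rates += 0.1 * (preferred @ error)

print(decoded)  # ends up pointing close to the intended direction
```

The decoder never changes; only the firing rates do. That is the point of the quote above: the software reads a fixed sample of neurons, and the animal's brain does the adapting.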
As Schwartz explains in the recent Science Daily piece about the new stage of the project:
“In our research, we’ve demonstrated a higher level of precision, skill and learning,” explained Dr. Schwartz. “The monkey learns by first observing the movement, which activates his brain cells as if he were doing it. It’s a lot like sports training, where trainers have athletes first imagine that they are performing the movements they desire.”
Schwartz here focuses, not on the cleverness of the lab, but on the monkey’s ability to train — using a kind of involuntary visualization — to eventually use the new limb.
There’s lots to like about this project, not only the theoretical implications, but also the potential applied benefits. I look forward to hearing more from Schwartz’s lab — this is great stuff. The only downside, and maybe I’m the only one worried about this, is whether we should really be putting monkeys and robot limbs together. Am I the only one who perceives the danger of bionic monkeys flinging poo about and looking for marshmallows?
Velliste, Meel, Sagi Perel, M. Chance Spalding, Andrew S. Whitford, & Andrew B. Schwartz. 2008. Cortical control of a prosthetic arm for self-feeding. Nature (28 May 2008) doi:10.1038/nature06996
University of Pittsburgh Schools of the Health Sciences (2008, May 28). Mind Over Matter: Monkey Feeds Itself Using Its Brain. ScienceDaily. Retrieved May 30, 2008, from http://www.sciencedaily.com/releases/2008/05/080528140245.htm
See also EurekaAlert!: Researchers develop neural prosthesis allowing a monkey to feed self using only its brain. 26-Oct-2004.