Earlier today, I wrote a post on Bruce Wexler’s book in which I suggested that ideology and ‘culture shock’ were not necessarily the best case studies to work with when discussing the integration of social theory with the neurosciences. My reasons for this are many, but they boil down to a fear that, if we choose our case studies poorly, we will not offer compelling integrated accounts that bring together biological studies of the brain and humanistic studies of society and culture. It may have seemed, however, that I was not overly generous to Wexler, even though I quite like his work, so I thought I’d balance this out by giving some examples of ways that anthropologists have similarly chosen case studies that make it especially difficult to present coherent accounts across different scales and perspectives on a subject.
One of the best/worst examples of attempting to prematurely bridge the gap between culture and brain science is the concept of ‘memes.’ First proposed by Richard Dawkins in 1976 (in The Selfish Gene), a ‘meme’ is defined by Dawkins as the smallest unit of cultural information, which spreads from one person to the next through imitation, sort of like an infection. Dawkins and other ‘memeticists’ (is that a word… or a meme?) are at pains to argue that culture propagates itself, like a catchy tune you can’t get out of your head or a must-have fashion that your friends go crazy imitating, because of the effectiveness of the meme itself, not because it is useful to its bearers. Proponents also argue that, although there are significant differences with genes, evolutionary theory can be applied to memes to understand how cultural ideas spread, develop, change, or become extinct.
So what’s the problem with memes?
Where do we begin? In no particular order:

1) genes can mutate and develop independently of each other, while cultural ideas are often linked;

2) genes generally don’t change within an individual (although see the case of the girl whose blood type changed after a liver transplant), whereas memes shift in individuals constantly;

3) genes are empirically demonstrable; memes are, even if we are generous, a metaphor;

4) it is virtually impossible to determine clearly and unequivocally what the ‘elementary particle’ of an idea is;

5) ‘memetics’ renames cultural information but is not based on any consistent theory of how a ‘meme’ is stored in the brain (what, for example, about the fact that memory is a reconstitution of information, subject to distortion and systematic bias?);

6) the empirical problem that there just aren’t that many compelling scientific projects generated by memetics; and

7) the over-arching danger that this is simply an analogy mistaken for a reality (or two analogies, as ‘memes’ seem to fluctuate between gene-like and virus-like).
But the bigger issue is not the failure of the meme model, but the premature attempt to leap the explanatory chasm between genetic-level structures, evolutionary theory, and ‘bits’ of culture like tunes, religious ideas, or technology. That is, memetics tries to finesse away a lot of intermediate-level analysis and modeling. If you want to explain why a particular tune is catchy and goes on to the Billboard Top Ten, you’re going to have to theorize a lot more and draw in a lot more evidence about things like the recording industry, the history of musical styles, the emergence of particular artists, marketing, recording technology, audiences’ aesthetic expectations, the phenomenology of recorded music, culturally-based discussions of music perception, auditory science, and the like. You can’t simply say that the tune is like a gene-virus (a meme) and that it must be a successful replicator because, in the evolutionary environment of American pop radio, it has out-competed other tunes to reproduce. Heaven help us if that is the way we’re going to explain the career of Justin Timberlake.
The point for me is that ‘memetics’ is virtually doomed from the start because the explanatory gulf it attempts to bridge is simply too great. And so, like Ptolemy’s category of ‘planets,’ an unworkable, un-theorizable concept because it lumped the sun and moon in with the planets proper, the ‘meme’ covers too varied a herd of beasts for us to say much that is coherent about it.
But Dawkins and the memeticists are hardly alone in choosing examples that are just too damn hard to start with in forging a neuroanthropological synthesis. My own feeling is that a lot of early cognitive anthropology has suffered from the same problem, although perhaps to a lesser degree and with less problematic results. That is, some of the cognitive anthropology based upon connectionist theory sought to tackle problems that are probably within the scope of our research and knowledge of brain systems; for example, the relation of language categories to object perception (one that Daniel discussed earlier in his post on color) seems to me ambitious but conceivable.
In contrast, trying to explain the neurology of religion seems to me a project whose reach may exceed our grasp, for a number of reasons. First, we anthropologists can’t even really agree on what a ‘religion’ is, whether or not all humans have ‘religion,’ what the most essential parts of religion are, or the like. Some studies of religion take a very ‘Protestant Reformation’ line on the definition: religion is belief in the supernatural. Other theorists take a more ‘Ethical Society’ approach: religion is a community-based system for regulating behaviour and ideals. And religious people themselves seem to have radically different sorts of things in mind when they talk about religion: for one it is an emotional sense of connection; for another, a self-righteous certainty that punitive justice will be handed out; for another, an expectation of being surrounded by unseen forces; for another, a system of forces that can be placated, manipulated, negotiated with, and managed. As a person interested in the brain, I can think of many different sorts of mental processes that might be involved in ‘religion,’ depending upon which part of it, or which brand of it, you were using as your test case.
Second, religion is a hard case study to work with because it is a social phenomenon, characterized in part by broad community adherence. This means that trying to explain religion from the qualities of the brain is a bit like trying to explain British politics simply through reference to the qualities of the brain; there’s no way that you could get to the institutions, the parties, the way that elections go, and the like without a more complex model that placed history, social groups, and a wider environment into the dynamic system. Likewise, moving directly from the brain to religion simply leaves too many elements off the table that we will need to understand the phenomenon.
For these reasons, I think neuroanthropology has to choose its cases carefully, focusing on intermediate-level phenomena rather than trying, too precipitously, to explain political economy, or social identity, or ideology, from studies of the brain. Like any analytical perspective, we need to prove the effectiveness of the neuroanthropological approach on material for which it is ideally suited before we move on to the more tenuous, exploratory challenges. I think this is also what Daniel was getting at when he asked, a while back, in his post Engaging Anthropology and Social Theory, ‘what strands of social science research offer the most immediacy to our work?’ We need to find the intellectual opportunities where our style of dynamic brain-culture explanation is on the strongest ground and where the research questions before us demand the sorts of explanations we can provide. So when I criticize Wexler for over-reaching, it’s probably mostly because I fear the tendency so much and worry that it may undermine the long-term legitimacy of our undertaking in the eyes of the skeptical.