First, a shout out to all the faithful readers in cyberspace on this Christmas Eve. (Yes, yes, I know, my first post went up a week ago.) Have a great holiday!
Now let me get on with gaming and culture. Today I want to talk about how gaming illustrates the need to rethink what we mean by the concept “culture.” The traditional concept of culture is generally seen as something all around us, shaping our every move—Geertz’s system of symbols, where humans are caught up in webs of cultural meaning. Even in the wake of the post-modern critiques of the 1980s and 1990s, we are still left with rather homogeneous and causal views—for example, Bourdieu’s habitus, derived from class and used as an explanation for the differing tastes and behaviors of different groups of people; or the emphasis on discourses or ideologies that people cannot escape, so that discourses on gender and race seemingly define who we are and, in making that definition, give others power over us.
Greg has already started the critique. Here’s what he writes in his post Mirror Effects in Neurons Learned?:
“The evidence from the brain sciences does not support the assumption that all implicit learning has ideational foundations or backing, but most models of culture really do not allow for motor learning to exist on its own as a relevant category of culture. I know, some will try to call me out on this and argue that late Pierre Bourdieu’s notion of the habitus is really a motor learning theory, but the fact that he has to assume that there is either a sociological structure (class) or cultural structure (a kind of crypto-structuralist cognitive set of categories) behind all action suggests that it is, ultimately, either a sociological- or cognitive-determinist model, not one that allows motor realms any autonomy.”
So, how about some gaming autonomy? Let me turn once again to my trusty Game Informer, in its January 2008 edition. In yesterday’s post, I talked about how games offer us an immersive and interactive experience. I want to expand on that post by focusing specifically on how designers use something close to the concept of “culture” as one way to make games immersive and involving. The feature article on the first-person shooter Tiberium, which builds off the real-time strategy franchise Command & Conquer, reads:
“To help Tiberium break out as a unique game in the Command & Conquer series, EA [Electronic Arts] knew the company needed to dive deeper into the series mythology. ‘We’re not Star Wars or Lord of the Rings, but we do aspire to be like that, to be a universe worthy of devotion,’ says Plummer. ‘We didn’t try to rewrite the Tiberium universe, but looked for where the holes were and tried to patch them up.’ The patching started with the Command & Conquer team creating a Tiberium ‘bible,’ a fictitious artifact from the future written by an archivist who had lived through the Tiberium wars. Not content with merely recapping past game plots, the bible dives so deeply into the C&C world that it even gets into the scientific explanation for Tiberium [a mutagenic crystal starting to cover the Earth and harvested by bad-ass extraterrestrials]… The bible also fleshed out the extensive history of the game’s three warring factions: the United-Nations-esque Global Defense Initiative (GDI), the overzealous religious faction Brotherhood of Nod, and the Scrin, the mysterious new alien race… But mythology is only one aspect of creating a believable universe. It must have a distinct look and feel. ‘I want the world to be a place you want to spend time,’ says Plummer. ‘My favorite shooters are the ones where the world is a cool place. The gameplay has got to rock, but it also needs to feel like a believable fantasy that holds together with conviction’.”
Plummer’s closing quote brings us back to yesterday’s post and the importance of look and feel in the interaction with games. But Plummer’s overall discourse raises a number of other questions. Instead of assuming that “culture” has automatic causal force that people automatically buy into, Plummer points to the importance that the game universe be “believable,” that the different elements hold together “with conviction,” that the game provide history and explanations, and that the game’s universe inspire devotion and not have gaping holes.
Why does he emphasize these things? Because Plummer cannot assume that cultures have automatic buy-in (or causal force). He literally competes for buyers, who could just as well purchase another shooter with roughly the same gameplay and feel but a more convincing “mythology.” I am not trying here to extend a competitive view onto culture (for me, culture gets “buy-in” largely through the process of human development), but rather to point out that Plummer cannot afford to make the same assumptions about “culture” that most anthropologists do. If he did, he’d be out of a job. While our outsider view and critical stance provide us insights that very few game developers will ever have, their insider view of how people “get” a culture has lessons to offer us.
What are those lessons? There are at least three—his emphasis on the devotion people can develop (an emotional side), the importance of providing history and explanations (a cognitive side), and the need to patch holes in the perception of the Tiberium universe (a perceptual side). Greg would surely want me to add a fourth, a motor side, which connects directly into fun gameplay that helps to make the universe convincing through physical interactions that are engrossing and without interruption. Of these four, I will home in on the cultural perception lesson.
Plummer knows that gamers, like everyone else, bring things to the table (or the console, in this case). Thus, they will pick up on the gaping holes, and for some people, that will be enough to turn them off. Plummer definitely sees this cultural perception by players as part of the way to build devotion to an overall universe. He knows that people pick up on what’s missing. Rather than asking how and why, as an anthropologist might, he sets out to do something about it. I’ll play the role of the anthropologist.
I am reminded of Chomsky’s devastating critique of B.F. Skinner’s behaviorist approach to language. Chomsky pointed out that there simply is not enough time and not enough complexity in the input for children to learn language based solely on hearing it in their local environments. They must come equipped with “something extra,” which Chomsky proposed as “generative grammar.” People today argue over whether that grammar is universal or emergent, evolved or culturally learned—in other words, they rehearse the nature-nurture debate.
I’d rather start with something more basic. Children come equipped with brains. Brains quickly develop expectations about their environments. When those expectations are not met, brain systems react in all sorts of interesting ways—the mismatch can be used as a learning signal, can simply be ignored, or can lead to lots of dissonance, among other things. So I am going to assert that people have cultural perception, a set of cultural expectations that guide their interactions with the world. When those expectations are not met, there are at times gaping holes and at other times a simple glossing over (these sorts of reactions often depend on the context at hand). My guess is that these cultural expectations mix innate and learned patterns in ways that make the nature-nurture debate moot.
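The mismatch-as-learning-signal idea has a well-known computational analogue in prediction-error (or delta-rule) learning, where an agent nudges its expectation toward what it actually observes. The sketch below is a toy illustration of that analogue only, not a model of any actual neural or cultural process; every name and number in it is invented for the example.

```python
# Toy delta-rule sketch (hypothetical, for illustration): an agent holds
# a numeric "expectation" about its environment and treats the mismatch
# between expectation and observation as the signal that drives learning.

def update_expectation(expected, observed, learning_rate=0.3):
    """Move the expectation toward the observation by a fraction of the mismatch."""
    mismatch = observed - expected          # the "expectation violation"
    return expected + learning_rate * mismatch

expectation = 0.0                           # naive starting expectation
for observation in [1.0, 1.0, 1.0, 1.0, 1.0]:
    expectation = update_expectation(expectation, observation)

# Repeated exposure shrinks the mismatch, so the expectation settles on
# the observed regularity; a sudden change in the environment would then
# produce a large mismatch again, which can be learned from or ignored.
print(round(expectation, 3))
```

The point of the toy is only the shape of the loop: nothing here "causes" the expectation from outside; it emerges from repeated interaction, and the same mismatch can serve as a cue to learn, to gloss over, or to feel dissonance.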
As visual perception research shows, many different lines of information in the brain get processed and combined to create what we see. Take certain elements away, and you can get interesting phenomena like “blindsight,” where people have no conscious perception of sight yet still respond neurologically to objects in their field of vision.
Cultural perception operates similarly, in that different lines of information are combined into a holistic vision or, better put, “meaning.” When certain expectations are not met, people notice holes. Alternatively, in ambiguous or familiar situations, people will often engage in cultural filling-in, similar to what happens with visual filling-in. Game designers know that for a convincing universe, they need buy-in—gameplay buy-in, “look and feel” buy-in, story buy-in (characters and narrative), emotional buy-in, and cultural perception buy-in. Potential players bring these abilities to the console, the computer, or the hand-held. The PlayStation 3, Wii, or Xbox 360 does not cause the buy-in. But game designers know they can facilitate buy-in if they do certain things that affect how people interact with the games and with other people around those games.
While “culture as cause” might once have been a handy way for us to grasp the differences between ourselves and different people around the globe, it’s not an assumption that helps us understand what “culture” is or how people live meaningful lives. Similarly, culture as a series of symbols, ideologies, discourses or representations provides us with an outside perspective on cultural phenomena, a way to understand differences. But this perspective does little to tell us about how those differences come to exist and what they mean on the ground. That is the world where people live, and our theory of culture should reflect that (and not just our convenient assumptions). Game developers get it, players just do it, and anthropologists will surely critique it. But understand it? That’s a bigger challenge.