The New York Times Science section has a recent article, Blind to Change, Even as It Stares Us in the Face, by Natalie Angier (you can access it without charge by signing up to their site). The article follows along some of the lines laid out by Jeremy Wolfe of Harvard Medical School, at a symposium on Art and Neuroscience.
Angier discusses Wolfe’s use of Ellsworth Kelly’s ‘Study for Colors for a Large Wall’ to illustrate what is typically called ‘change blindness’: ‘the frequent inability of our visual system to detect alterations to something staring us straight in the face.’ Kelly’s painting is an 8×8 grid of coloured squares, and Wolfe apparently showed slides of the painting repeatedly, sometimes with the colours of squares altered. When he first showed the slide, Angier writes: ‘We drank it in greedily, we scanned every part of it, we loved it, we owned it, and, whoops, time for a test.’ After the test, the audience was thoroughly uncertain about its ability to recall even the basic pattern of colours. ‘By the end of the series only one thing was clear: We had gazed on Ellsworth Kelly’s masterpiece, but we hadn’t really seen it at all,’ Angier reports.
Change blindness is a fun phenomenon to put into research design. Researchers get away with some really amazing manipulations without their subjects recognizing them. Some experiments report that subjects fail to notice, as Angier details, whole stories of buildings disappearing or that ‘one poor chicken in a field of dancing cartoon hens had suddenly exploded.’
Dr. Wolfe also recalled a series of experiments in which pedestrians giving directions to a Cornell researcher posing as a lost tourist didn’t notice when, midway through the exchange, the sham tourist was replaced by another person altogether.
I’ve also seen discussions of experiments in which subjects watched a videotape and failed to notice a guy in a gorilla suit walking through the middle of the video because they were asked to pay attention to other details.
But is it that we’re blind to change, or that we just trust the world to remember for us, and we’re really good at getting the information we need?
The issue for Angier, and I think she’s right, is attention: ‘At what stage in the complex circuitry of sight do attentiveness and awareness arise, and what happens to other objects in the visual field once a particular object has been designated worthy of a further despairing stare?’ Attention is trickier to understand than we might think, partly because there are both bottom-up attentional effects (things that grab our attention, like sudden motion and snake-shaped objects [as I discussed in a previous post]) and top-down attentional drivers (expectations, trained attention, looking for salient information). As Angier details, the stimulus-driven attention grabbers have been easier to study than the volitional, top-down stimulus-seeking processes, but there has been recent headway on the latter.
Without attention, the change blindness experiments show that we are pretty ‘blind’ to things in our visual fields, even tourists becoming new people or guys in gorilla suits.
What is the reason for the effect of attention? Angier cites what I think is the most common assumption, which I’m going to take issue with, so I’ll cite it at some length:
Visual attentiveness is born of limited resources. “The basic problem is that far more information lands on your eyes than you can possibly analyze and still end up with a reasonable sized brain,” Dr. Wolfe said. Hence, the brain has evolved mechanisms for combating data overload, allowing large rivers of data to pass along optical and cortical corridors almost entirely unassimilated, and peeling off selected data for a close, careful view. In deciding what to focus on, the brain essentially shines a spotlight from place to place, a rapid, sweeping search that takes in maybe 30 or 40 objects per second, the survey accompanied by a multitude of body movements of which we are barely aware: the darting of the eyes, the constant tiny twists of the torso and neck. We scan and sweep and perfunctorily police, until something sticks out and brings our bouncing cones to a halt.
This, to me, is the typical explanation: scarce memory resources mean we ‘have to have’ attention to deal with the onslaught of information. In fact, I’m not so sure of that; for example, there are people with photographic memory. I’m assuming that their brains are not so alien in structure that this capacity is unthinkable for the rest of us.
The spotlight model also assumes that eyes are for conscious object perception. In fact, this is just one of the things our eyes do. We don’t remember all the objects we see, I would argue, because we don’t interact with them as objects. When I walk through my dining room, my brain isn’t going: ‘Chair. Table. Other chair. Cat. Firewood. Another chair…’ My conscious brain is going, ‘If I were that damn cat, where would I be?’ Why on earth would I want my conscious thought to have all of these objects constantly thrust into its attention? What a nightmare for trying to get anything done mentally.
In less flippant terms, object perception and recognition is just one of the tasks that I want my visual system(s) to be able to do. I also want them to alert me to sudden changes that may or may not be dangerous, allow me to navigate familiar space, keep me upright, regulate my diurnal cycle, etc. My visual system(s) actually do a whole lot of these tasks really well, and I can see little to gain if my object recognition system suddenly took over my attention and stuffed more and more information into my memory. I don’t know if I’d ‘run out of memory,’ but I do know that I’d be taking a lot of unnecessary information on board and storing it.
I also am uncomfortable with the ‘shortage of memory’ explanation because it leads to this:
the results of change blindness studies and other experiments strongly suggest that the visual system can focus on only one or very few objects at a time, and that anything lying outside a given moment’s cone of interest gets short shrift. The brain, it seems, is a master at filling gaps and making do, of compiling a cohesive portrait of reality based on a flickering view.
First, I don’t agree with the idea that we can only focus on one or a few objects at a time; I think our visual perception system can become attuned to different sorts of things. We know that some kinds of video games can affect children’s ability to track multiple moving objects. Sports research on soccer players and my own work on capoeira players suggests that peripheral vision can be very good at tracking moving objects, even if we can’t always consciously identify those objects.
Second, I’m not sure that the brain has to compile ‘a cohesive portrait of reality based on a flickering view.’ I think it just has to know where to look in the visual field to get the information it needs. That is, the best model of the world is the world; we don’t have to create a representation in our brains of everything in it. In fact, when we are called upon to manipulate a representation of the world in our heads, it turns out that we need the time and mental resources to re-enact a physical engagement with it (‘where did I leave my mobile phone? let me mentally retrace my steps…’). Why reproduce the world on the inside of the brain when it’s still out there? What change blindness experiments suggest is that we lean on the world as a concrete form of memory; when it gets switched on us in unexpected ways, such as an experimenter stepping in to replace the person we were just talking to, we miss the change because we assume the world is consistent and don’t necessarily bother to make an internal representation of it. It is the assumption that the ‘brain makes a portrait’ that needs to be questioned; we can come up with a portrait if we need to, but it’s likely to be deeply flawed, because brains don’t usually need to do that.
Another thing about this article that’s interesting from a neuroanthropological perspective is re-entrant stimuli affecting the strength of sensation. Angier does a better job of explaining than I could:
Recent studies with both macaques and humans indicate that attentiveness crackles through the brain along vast, multifocal, transcortical loops, leaping to life in regions at the back of the brain, in the primary visual cortex that engages with the world, proceeding forward into frontal lobes where higher cognitive analysis occurs, and then doubling back to the primary visual centers. En route, the initial signal is amplified, italicized and annotated, and so persuasively that the boosted signal seems to emanate from the object itself. The enhancer effect explains why, if you’ve ever looked at a crowd photo and had somebody point out the face of, say, a young Franklin Roosevelt or George Clooney in the throng, the celebrity’s image will leap out at you thereafter as though lighted from behind.
That is, perceptual signals are never (to the best of my knowledge) determined only by the strength of the stimulus from the external world. Perception, as phenomenologists have long argued, is a conjunction of the sensual world and the sensate subject; for example, visual perception is visual field plus visual attention.
Attentional biases and patterns, of course, are going to be culturally influenced. That is, they’re not going to be purely personal nor are they going to be universal. So you might be highlighting a young George Clooney in a photo when I’m noticing the extraordinary detailing on the car in the picture (and my wife is noticing the handsome warm-blood mare in the pasture in the background). That is, our attentional patterns, even when we’re not conscious of them, are driven by motivations that can be elaborated in many different ways.
I like the change blindness article; it’s fun, and change blindness effects are great for thinking about the difference between what we perceive and ‘what’s really out there.’ But to fully understand how the neurological systems responsible for perception construct our experiences, I think we need to get away from the ‘shortage of brain power’ explanation that so often comes up. We have to be more willing to see our perceptual skills as finely tooled to what we need them to do, which they often accomplish in ways that might surprise us. For example, our visual perception system might just decide to leave the sensory world out there, where we can quickly find it, rather than making a complete model of it in our memory. It’s actually a pretty clever way to handle perception, and we wouldn’t have noticed it, even if someone had put on a gorilla suit, until folks like Wolfe showed it to us.