He Wants to Save Classics From Whiteness. Can the Field Survive?
Outstanding magazine article that uses the story of one professor – Dan-el Padilla Peralta – to examine what decolonizing a discipline can mean. It focuses on classics, one of the whitest of disciplines and one often used to justify white supremacy. It also illustrates the debate around what decolonizing means and why it can matter. For most of you, it may be behind a paywall. Below I also post a YouTube video of a lecture Padilla Peralta gave this past fall, delivered in an academic rather than a popular register.
To see classics the way Padilla sees it means breaking the mirror; it means condemning the classical legacy as one of the most harmful stories we’ve told ourselves. Padilla is wary of colleagues who cite the radical uses of classics as a way to forestall change; he believes that such examples have been outmatched by the field’s long alliance with the forces of dominance and oppression. Classics and whiteness are the bones and sinew of the same body; they grew strong together, and they may have to die together. Classics deserves to survive only if it can become “a site of contestation” for the communities who have been denigrated by it in the past. This past semester, he co-taught a course, with the Activist Graduate School, called “Rupturing Tradition,” which pairs ancient texts with critical race theory and strategies for organizing. “I think that the politics of the living are what constitute classics as a site for productive inquiry,” he told me. “When folks think of classics, I would want them to think about folks of color.”
African Americans remain far more likely to be incarcerated for drug crimes than white Americans. And they are four times more likely to be arrested for marijuana possession than their white counterparts. In the UK, while the inequality is not so extreme, racial bias in sentencing still exists. Last week, it was reported that black drug dealers are 1.4 times more likely to be handed immediate custodial sentences than white people convicted of similar crimes.
Hart displays the passion of the convert in attacking misconceptions of African American drug use, because, as he confesses in the book, he once “wholeheartedly believed that drugs destroyed certain black communities”.
Visiting white friends in pleasant neighbourhoods who engaged in the same drug use he believed led to community dysfunction made him realise that it wasn’t the drugs but the context in which they were taken that harmed people. All the same, he says it took him a long time to acknowledge to himself what his scientific research and personal experience were telling him. So why did he resist for many years the logic of his own findings on drugs such as heroin?
The question ‘Nature or nurture?’ always provoked debate. The quant types (mathematics and science majors) thought genius was due to natural gifts; parents and teachers had told them that they’d been born with a special talent for quantitative reasoning. The jocks (varsity athletes) thought exceptional accomplishment was all hard work: no pain, no gain. Coaches had taught them that their achievement was the result of endless hours of practice. Among novice political scientists, conservatives thought genius a God-given gift; liberals thought it was caused by a supportive environment. No answer? Call in the experts: readings from Plato, William Shakespeare and Charles Darwin to Simone de Beauvoir followed, but each had his or her own take.
The students hoped for something more concrete. Some wanted to know if they were already geniuses and what their futures might hold. Most wanted to know how they, too, might become a genius. They had heard that I’d studied geniuses from Louisa May Alcott to Émile Zola, and thought that I might have found the key to genius. So I asked: ‘How many of you think you already are or have the capacity to be a genius?’ Some timidly raised their hands; the class clowns did so emphatically. Next: ‘If you’re not one already, how many of you want to be a genius?’ In some years, as many as three-quarters of the students raised their hands. Then I asked: ‘OK, but what exactly is a genius?’
The role of uncertainty, and how quickly resolving it matters for acting in culturally guided ways, eliciting social support, and doing all the other things that help make our species successful.
These lines of evidence increasingly indicate that H. sapiens originated in Africa, although not necessarily in a single time and place. Instead it seems diverse groups of human ancestors lived in habitable regions around Africa, evolving physically and culturally in relative isolation, until climate-driven changes to African landscapes spurred them to intermittently mix and swap everything from genes to tool techniques. Eventually, this process gave rise to the unique genetic makeup of modern humans…
The remains of five individuals at Jebel Irhoud exhibit traits of a face that looks compellingly modern, mixed with other traits like an elongated brain case reminiscent of more archaic humans. The remains’ presence in the northwestern corner of Africa isn’t evidence of our origin point, but rather of how widely spread humans were across Africa even at this early date [300,000 years ago].
Other very old fossils often classified as early Homo sapiens come from Florisbad, South Africa (around 260,000 years old), and the Kibish Formation along Ethiopia’s Omo River (around 195,000 years old).
I wanted to talk first about the quote from your study that reads “the marginalised non-Western lands and their peoples leaving the order of superior and inferior more or less unchanged through the history of the discipline.” I found that fascinating and wanted to have you expand on that a little bit.
That came about from my co-author. She’s been working in South Africa for the last 20-some-odd years, and I gave a talk about how the Asian fossil record has been represented, and she was frustrated because she pointed out that no matter how the narrative goes, Africa is always at the bottom. If it’s the site of human origins, they’re primitive. If it was the last place to become human, they’re primitive. So they can’t win. That’s actually how it came about: her frustration and observation that the starting point was the primitiveness of Africans and the elevation of Europeans. The irony is that novelties in human evolution—the youngest fossil record for our species and some of the latest developments—were in Europe, and somehow, that’s still the pinnacle of our achievement. Everything is then framed as “these various places had art, but it all came together in Europe”, whereas before, in the 1940s and ’50s, it was “it all started in Europe and no one else became civilised until after we developed art”.
The inevitable process of “democratisation” touted by all the platforms as evidence of their own socially progressive nature, was often the result of simple arithmetic. In cases like WeWork, the maths did not even add up. Whether Robinhood, which has now urgently raised an extra $1bn, will be luckier remains to be seen.
For most of these companies, the sweet promises of “democratisation” have made such maths irrelevant, at least in the short term. This explains how the tech industry has emerged as the leading purveyor of populism around the globe.
This may seem an overstatement. While we tend to reserve the dreaded P word for the Bannons, the Orbáns and the Erdoğans of this world, can we think of Bezos or Zuckerberg – and the stock-trading Robinhood army – in those terms?
We can and we should. With everyone’s eyes fixed on Trump-style populism – primitive, toxic, nativist – we have completely missed the platforms’ role in the emergence of another, rather distinct type of populism: sophisticated, cosmopolitan, urbane. Originating in Silicon Valley, this “platform populism” has advanced by disrupting hidden, reactionary forces that stand in the way of progress and “democratisation” – all by unleashing the powers of digital technologies.
In his newest book, Work: A Deep History, From the Stone Age to the Age of Robots, [anthropologist] James Suzman, Ph.D., goes back much, much further than that. Ten thousand years, in fact, to the agricultural revolution, and to the beginning of food insecurity. In the case of drought or pests, farms would be destroyed and famine would ensue. Effectively, this was the beginning of the complementary notions of scarcity and productivity: You can never have enough, and so you should always be working to produce more. Sound familiar?
To explain why and how this happened, Suzman draws heavily on the decades-long work he has done studying the Bushmen of southern Africa. In particular, the Ju/’hoansi, a tribe whose members, up until the latter half of the 20th century, were still hunting and gathering like their ancestors some 200,000 years before. (To pronounce Ju/’hoansi without the click they use, Suzman advises combining the French “jus” with “waahsi.”)
By juxtaposing this anomalous foraging community—where members worked only 15 hours a week—with the farming societies that followed, Suzman highlights the shifts that ushered in the ideas that now define modern work. It’s not just that we became obsessed with productivity, but that we fundamentally changed our relationship to things like time, history, the land, and one another.
You can hardly do experiments on consciousness without having first defined it. But that’s already difficult because we use the word in several ways. Humans are conscious beings, but we can lose consciousness, for example under anesthesia. We can say we are conscious of something — a strange noise coming out of our laptop, say. But in general, the quality of consciousness refers to a capacity to experience one’s existence rather than just recording it or responding to stimuli like an automaton. Philosophers of mind often refer to this as the principle that one can meaningfully speak about what it is like to be a conscious being — even if we can never actually have that experience beyond ourselves…
But to Koch, the argument that all of cognition, including consciousness, is merely a form of computation “embodies the dominant myth of our age: that it’s just an algorithm, and so is just a clever hack away.” According to this view, he said, “very soon we’ll have clever machines that model most of the features that the human brain has and thereby will be conscious.”
He has been developing a competing theory in collaboration with its originator, the neuroscientist Giulio Tononi of the University of Wisconsin-Madison. They say that consciousness is not something that arises while turning inputs into outputs but rather an intrinsic property of the right kind of cognitive network, one that has specific features in its architecture. Tononi christened this view integrated information theory (IIT).
In contrast to GWT, which starts by asking what the brain does to create the conscious experience, IIT begins instead with the experience. “To be conscious is to have an experience,” Tononi said. It doesn’t have to be an experience about anything, although it can be; dreams, or some “blank mind” states attained by meditation also count as conscious experiences. Tononi has sought to identify the essential features of these experiences: namely, that they are subjective (they exist only for the conscious entity), structured (their contents relate to one another: “the blue book is on the table”), specific (the book is blue, not red), unified (there is only one experience at a time) and definitive (there are bounds to what the experience contains). From these axioms, Tononi and Koch claim to have deduced the properties that a physical system must possess if it is to have some degree of consciousness.
The philosopher Yuk Hui, a native of Hong Kong who now teaches in Germany, thinks that Heidegger is the most profound of recent Western thinkers on technology — but also that it is necessary to “go beyond Heidegger’s discourse on technology.” In his exceptionally ambitious book The Question Concerning Technology in China (2016) and in a series of related essays and interviews, Hui argues, as the title of his book suggests, that we go wrong when we assume that there is one question concerning technology, the question, that is universal in scope and uniform in shape. Perhaps the questions are different in Hong Kong than in the Black Forest. Similarly, the distinction Heidegger draws between ancient and modern technology — where with modern technology everything becomes a mere resource — may not universally hold.
Hui explores, for instance, Kant’s notion of the cosmopolitan, and the related role of print technology. A central concept in Enlightenment models of rationality, the cosmopolitan is the ideal citizen of the world engaged in public reasoning, and Kant believed that a “universal cosmopolitan condition” would one day be the natural outcome of history. But Kant’s understanding of what that means is thoroughly entangled with the rise and expansion of print culture. It is directly through print culture that the “Republic of Letters,” the very epitome of cosmopolitanism as Kant knew it, is formed. But, then, what might a cosmopolitan be within a society whose print culture is either nonexistent or radically other than the one Enlightenment thinkers knew?
Contemporary understandings of paleoanthropological data illustrate that the search for a line defining, or a specific point designating, “modern human” is problematic. Here we lend support to the argument for the need to look for patterns in the paleoanthropological record that indicate how multiple evolutionary processes intersected to form the human niche, a concept critical to assessing the development and processes involved in the emergence of a contemporary human phenotype. We suggest that incorporating key elements of the Extended Evolutionary Synthesis (EES) into our endeavors offers a better and more integrative toolkit for modeling and assessing the evolution of the genus Homo. To illustrate our points, we highlight how aspects of the genetic exchanges, morphology, and material culture of the later Pleistocene complicate the concept of “modern” human behavior and suggest that multiple evolutionary patterns, processes, and pathways intersected to form the human niche.
And here is the Twitter thread on paleoanthropology and the extended synthesis from one of the authors, Agustin Fuentes.
British science journalist Angela Saini tells this story in her exceptional and damning book, Superior: The Return of Race Science. “Race was the entire premise upon which they were doing their research, but they were unable to tell her what it was,” writes Saini. “Their work instead seemed to rest upon a hope that if they just persisted, they would eventually come to find meaning in these categories. What they couldn’t yet define would then be defined. Somehow it would become real.”
In the 16 years since the anthropologist made her observations, scientists have still not found any meaningful biological definition of race. All human genomes are 99.5 percent identical, and although it’s true that the remaining 0.5 percent can vary in ways that correlate with geographical ancestry, these correlations do not strictly map to racial categories. If you hand a scientist your genome, she might be able to tell you something about the geographical distribution of your ancestors, but she cannot tell you what race you are. There’s simply nothing in the genome that’s an unambiguous marker of race. And yet, many scientists and doctors continue to use race as if it’s a meaningful biological category.