We created a network to tell these stories. Help us share them. Submit your site today.
The African American Burial Ground & Remembering Project (AABGP) is a collaboration between the University of South Florida and local artists working to address black cemetery erasures in the Tampa Bay area. Our network has partnered with the AABGP team to coordinate research and advocacy efforts that share the same goal: to preserve black cemeteries by telling their stories.
Love is as love does. “There’s no reason to think it would be much different for humans than nonhumans,” says Marc Bekoff, author of The Emotional Lives of Animals. “I’ve known mourning doves”—a species closely related to pigeons—“who were more in love than a lot of the people I’ve known.”
Energy bar and shoe companies have profited from products inspired by these “superrunners.” Traditional Rarámuri ways of life are under threat with the encroachment of mining, logging, climate change, organized crime, and the arrival of new technology, including cellphones. And misconceptions have swirled around this community.
Against this backdrop, Lieberman and his colleagues document how Rarámuri running remains intimately interconnected with the community’s culture, religion, and social life. They scientifically examine how the runners’ physiology does—and does not—contribute to their remarkable stamina. In the process, the authors debunk widely believed stereotypes and examine the deep spiritual significance of Rarámuri racing.
Little Foot lived roughly halfway between modern times and the estimated age of a human-chimp common ancestor, says paleobiologist David Green of Campbell University in Buies Creek, N.C., a member of Carlson’s team. If that ancient ancestral creature was about the size of a chimp, as many researchers suspect, shoulders resembling those of gorillas would have supported slow but competent climbing, Green says. Gorillas spend much of their time knuckle-walking on the ground. These apes climb trees with all four limbs, reaching up with powerful shoulders and arms to pull themselves along.
“The maintenance of a gorilla-like shoulder in Little Foot offers clues that climbing remained vital for early [hominids],” Green says. It’s possible, he added, that Little Foot’s shoulder design represented “evolutionary baggage” among hominids evolving bodies more suited to upright walking.
Helen Sword – Books on Writing.
Sword is the author of Stylish Academic Writing, among other works. Here are some of her recommendations.
Over the last few decades, empirical studies of perception, action, learning, and development have revealed that participants vary in which variables they detect and often rely on nonspecifying variables. This casts doubt on the Gibsonian conception of information as specification. It is argued that a recent ecological conception of information has solved important problems but insufficiently explains what determines the object of perception.
Drawing on recent work on developmental systems, we sketch the outlines of an alternative conception of perceptual information. It is argued that perceptual information does not reside in the ambient arrays; rather, perceptual information is a relational property of patterns in the array and perceptual processes. What a pattern in the ambient flow informs about depends on the perceiver who uses it. We explore the implications of this alternative conception of information for the ecological approach to perception and action.
Survival prompts organisms to prepare adaptive behavior in response to environmental and social threat. However, what are the specific features of the appearance of a conspecific that trigger such adaptive behaviors? For social species, the prime candidates for triggering defense systems are the visual features of the face and the body. We propose a novel approach for studying the ability of the brain to gather survival-relevant information from seeing conspecific body features. Specifically, we propose that behaviorally relevant information from bodies and body expressions is coded at the levels of midlevel features in the brain. These levels are relatively independent from higher-order cognitive and conscious perception of bodies and emotions. Instead, our approach is embedded in an ethological framework and mobilizes computational models for feature discovery.
Lepper recalled how Ross found inspiration from a close examination of paradoxes and peculiarities in everyday life. This made it “easy for others to study the applications of his ideas to real-world problems and settings outside the laboratory,” Lepper said.
When his Stanford tenure seemed uncertain in 1977, Ross wrote what was essentially a research statement covering all his work up to that point in an effort to prove his mettle. It was in that paper, “The Intuitive Psychologist and His Shortcomings: Distortions in the Attribution Process,” that Ross coined the term “fundamental attribution error,” referring to the failure to acknowledge the importance of the situation in determining behavior; among his many contributions to the field, it is one of the most lasting. The paper secured Ross’s tenure and has since become one of the most quoted articles in all of social psychology.
The language commonly used in human genetics can inadvertently pose problems for multiple reasons. Terms like “ancestry”, “ethnicity”, and other ways of grouping people can have complex, often poorly understood, or multiple meanings within the various fields of genetics, between different domains of biological sciences and medicine, and between scientists and the general public. Furthermore, some categories in frequently used datasets carry scientifically misleading, outmoded or even racist perspectives derived from the history of science.
Here, we discuss examples of problematic lexicon in genetics, and how commonly used statistical practices to control for the non-genetic environment may exacerbate difficulties in our terminology, and therefore understanding. Our intention is to stimulate a much-needed discussion about the language of genetics, to begin a process to clarify existing terminology, and in some cases adopt a new lexicon that both serves scientific insight, and cuts us loose from various aspects of a pernicious past.
In everyday life humans regularly seek participation in highly complex and pleasurable experiences such as music listening, singing, or playing, that do not seem to have any specific survival advantage. The question addressed here is to what extent dopaminergic transmission plays a direct role in the reward experience (both motivational and hedonic) induced by music. We report that pharmacological manipulation of dopamine modulates musical responses in both positive and negative directions, thus showing that dopamine causally mediates musical reward experience.
We, the undersigned associations and organizations, state our firm opposition to a spate of legislative proposals being introduced across the country that target academic lessons, presentations, and discussions of racism and related issues in American history in schools, colleges and universities. These efforts have taken varied shape in at least 20 states, but often the legislation aims to prohibit or impede the teaching and education of students concerning what are termed “divisive concepts.”
These divisive concepts as defined in numerous bills are a litany of vague and indefinite buzzwords and phrases including, for example, “that any individual should feel or be made to feel discomfort, guilt, anguish, or any other form of psychological or emotional distress on account of that individual’s race or sex.” These legislative efforts are deeply troubling for numerous reasons.
First, these bills risk infringing on the right of faculty to teach and of students to learn. The clear goal of these efforts is to suppress teaching and learning about the role of racism in the history of the United States.
The Writer’s Diet Test is an initiative of Helen Sword, who has worked to improve academic writing in recent years, notably in the well-received Stylish Academic Writing.
Is your writing flabby or fit? Enter a text sample of at least 100 words, then click “take the test” to see your diagnosis. (Don’t like the diet and fitness theme? Click the Settings wheel to change it). To shape up your sentences and sharpen your style, start your customized writing workout here.
In a new book being released this August, Australian philosopher Chris Letheby tackles the comforting-delusion quandary. In The Philosophy of Psychedelics, he asks whether we should care that psychedelics provide a comforting delusion if that delusion leads to less suffering. Perhaps more importantly, Letheby questions whether the mystical experiences really are what cause the dramatic outcomes seen in people who undergo this therapy. They may not be the whole story.
Motherboard talked to Letheby about mysticism, the importance of truth and knowledge, and why the field of psychedelics, in particular, needs philosophy to help guide it.
Motivational interviewing (MI) is an American behavioral health intervention that has spread dramatically across professional fields, including counseling psychology, corrections, dentistry, nursing, nutrition, primary-care medicine, safe-water interventions, and social work. This article explores how the central methodological principles of American pragmatism—if understood and learned as MI—take root among a group of contemporary American helping professionals.
More specifically, the article shows how professional training in MI inculcates: (1) a steadfast focus on the immediate consequences of one’s acts rather than floating or abstract conceptions of the true, the good, or the right; and (2) an investment in a highly reflexive mode of knowledge acquisition, which relinquishes the certainty of positivist explanations and embraces doubt. Indeed, learning how not to know is part and parcel of becoming an American pragmatist, and this article details the labor, costs, and rewards of adopting a pragmatic, or (in)expert, sensibility…
Taking the U-Haven training as its central ethnographic ground, the article details how these pragmatic principles—which proponents of MI take to be simultaneously ethical and technical—offer an alternative both to a deductive logic, which finds the roots of problems and (therefore) cues for solutions in the interiors of suffering people, and to the focus on measurable “clinical outcomes” so firmly embedded in the contemporary culture of social and health service provision.
Should Darwin Be Cancelled? | Robert Wright & Agustín Fuentes | The Wright Show
Agustín Fuentes on what he was and wasn’t saying in his controversial Science piece on Darwin
I want to be very clear: I acknowledge that humans are exquisitely social and that we have specialized mechanisms for social cognition and interaction. We are influenced by the elegant work of Cecilia Heyes, who argues that much of what we call social cognition across species is actually driven by domain-general precision-weighted inference mechanisms [Heyes and Pearce 2015]. Put simply, we learn about other people as if they were cues with a mean expected value and a reliability [Heyes et al. 2020] (this could be a mechanism through which we give testimony about others’ testimony).
Evidence for this type of view is extensive. Some of the most compelling comes from developmental work in humans. Human infants’ domain-general associative learning abilities portend their social cognition and behavior later in life [Reeb-Sutherland et al. 2012]. I would like to suggest that much of social cognition involves ill-posed and recursive inference problems. These are hard problems. They tax the inference machinery extensively. Any insults to that inference machinery will impair social inference (as well as inferences more broadly). This would be consistent with our observations relating paranoia in patients, on the continuum, and perhaps even in rodents, to non-social precision-weighted updating [Reed et al. 2020]. We still need to get from our non-social deficit to an extremely social belief.
Briefly, after Sullivan and colleagues, I think that having an enemy or persecutor can actually be reassuring. Perceiving that enemy as a source of misfortune increases the sense that the world is predictable and controllable, that risks are not randomly distributed [Sullivan et al. 2010] – blaming enemies might mollify the uncertainty that characterizes high paranoia, delusions, and psychosis more broadly. In settings where a sense of control is reduced, people will compensate by attributing exaggerated influence to an enemy, even when the enemy’s influence is not obviously linked to those hazards.
Increasing evidence suggests that cultural influences on brain activity are associated with multiple cognitive and affective processes. These findings prompt an integrative framework to account for dynamic interactions between culture, behavior, and the brain. We put forward a culture–behavior–brain (CBB) loop model of human development that proposes that culture shapes the brain by contextualizing behavior, and the brain fits and modifies culture via behavioral influences. Genes provide a fundamental basis for, and interact with, the CBB loop at both individual and population levels. The CBB loop model advances our understanding of the dynamic relationships between culture, behavior, and the brain, which are crucial for human phylogeny and ontogeny. Future brain changes due to cultural influences are discussed based on the CBB loop model.
Bronfenbrenner’s bioecological theory of human development is one of the most widely known theoretical frameworks in human development. In spite of its popularity, the notion of culture within the macrosystem, as an entity separate from everyday practices and therefore from microsystems, is problematic. Using the theoretical and empirical work of Rogoff and Weisner, influenced as they are by Vygotsky’s sociocultural perspective, we reconceptualize Bronfenbrenner’s model by placing culture as an intricate part of proximal development processes.
In our model, culture has the role of defining and organizing microsystems and therefore becomes part of the central processes of human development. Culture is an ever-changing system composed of the daily practices of social communities (families, schools, neighborhoods, etc.) and the interpretation of those practices through language and communication. It also comprises tools and signs that are part of the historical legacy of those communities, and thus diversity is an integral part of the child’s microsystems, leading to culturally defined acceptable developmental processes and outcomes.
When I returned from the wild, my Zen-like buzz hung around for months. To understand what was happening, I met with Rachel Hopman, Ph.D., a neuroscientist at Northeastern University. She told me about the nature pyramid. Think of it like the food pyramid, except that instead of recommending you eat this many servings of vegetables and this many of meat, it recommends the amount of time you should spend in nature to reduce stress and be healthier. Learn and live by the 20-5-3 rule.
20 Minutes. That’s the amount of time you should spend outside in nature, like a neighborhood park, three times a week. Hopman led a new study that concluded that something as painless as a 20-minute stroll through a city botanical garden can boost cognition and memory as well as improve feelings of well-being. “But,” she said, “we found that people who used their cell phone on the walk saw none of those benefits.”
One such curiosity concerns synapses, connection spots where signals move between nerve cells. Most message-sending axons touch a message-receiving dendrite just once. In the new dataset, about 90 percent of the connections were these one-hit contacts. Some pairs of cells have slightly more contacts. But every so often, researchers spotted cells that connect multiple times, including one pair linked by a whopping 19 synapses.
Multiple connections have been spotted in mouse brains, though not quite as abundantly as in this human sample. And fly brains can also have many connections between cells, though they’re more dispersed than the newly described human connections, says neuroscientist Pat Rivlin of Howard Hughes Medical Institute’s Janelia Research Campus in Ashburn, Va. There, Rivlin works on the FlyEM Project, which aims to create detailed maps of the fruit fly nervous system.
The large dataset on the human brain provides a breakdown of just how common these types of connections are, says Reid. And that raises the question of what these extraordinarily strong synapses might be doing in the brain.
As a biological anthropologist, I find the “men are better athletes” stereotype—which turns up in so many places, in so many ways—particularly frustrating.
Athletic performance differences can be caused by all manner of things across four broad categories: anatomical (physical features such as height), physiological (functional factors like the body’s ability to deliver oxygen to muscles), psychological, and socioeconomic (such as access to equipment and training knowledge). A number of myths and misconceptions exist within each of these categories that tend to ascribe overwhelming advantages to men.
I am here to dispel those myths and misconceptions.
With the explosion of neuroimaging, differences between male and female brains have been exhaustively analyzed. Here we synthesize three decades of human MRI and postmortem data, emphasizing meta-analyses and other large studies, which collectively reveal few reliable sex/gender differences and a history of unreplicated claims.
Males’ brains are larger than females’ from birth, a difference that stabilizes around 11% in adults. This size difference accounts for other reproducible findings: higher white/gray matter ratio, greater intra- versus interhemispheric connectivity, and larger regional cortical and subcortical volumes in males.
But when structural and lateralization differences are present independent of size, sex/gender explains only about 1% of total variance. Connectome differences and multivariate sex/gender prediction are largely based on brain size, and perform poorly across diverse populations.
Task-based fMRI has especially failed to find reproducible activation differences between men and women in verbal, spatial or emotion processing due to high rates of false discovery. Overall, male/female brain differences appear trivial and population-specific. The human brain is not “sexually dimorphic.”
Background: We are witnessing renewed debates regarding definitions and boundaries of human gender/sex, where lines of genetics, gonadal hormones, and secondary sex characteristics are drawn to defend strict binary categorizations, with attendant implications for the acceptability and limits of gender identity and diversity.
Aims: Many argue for the need to recognize the entanglement of gender/sex in humans and the myriad ways that gender experience becomes biology; translating this theory into practice in human biology research is essential. Biological anthropology is well poised to contribute to these societal conversations and debates. To do this effectively, a reconsideration of our own conceptions of gender/sex, gender identity, and sexuality is necessary.
Methods: In this article, we discuss biological variation associated with gender/sex and propose ways forward to ensure we are engaging with gender/sex diversity. We base our analysis in the concept of “biological normalcy,” which allows consideration of the relationships between statistical distributions and normative views. We address the problematic reliance on binary categories, the utilization of group means to represent typical biologies, and document ways in which binary norms reinforce stigma and inequality regarding gender/sex, gender identity, and sexuality.
Discussion and Conclusions: We conclude with guidelines and methodological suggestions for how to engage gender/sex and gender identity in research. Our goal is to contribute a framework that all human biologists can use, not just those who work with gender or sexually diverse populations. We hope that, in bringing this perspective to bear in human biology, novel ideas and applications will emerge from within our own discipline.
The use of heart rate variability (HRV) in research has been greatly popularized over the past decades due to the ease and affordability of HRV collection, coupled with its clinical relevance and significant relationships with psychophysiological constructs and psychopathological disorders. Despite the wide use of electrocardiograms (ECG) in research and advancements in sensor technology, the analytical approach and steps applied to obtain HRV measures can be complex. This poses a challenge to users who may not have adequate background knowledge to obtain the HRV indices reliably.
To maximize the impact of HRV-related research and its reproducibility, parallel advances in users’ understanding of the indices and the standardization of analysis pipelines will be crucial. This paper addresses this gap: it provides an overview of the most up-to-date and commonly used HRV indices, as well as common research areas in which these indices have proven useful, particularly in psychology. In addition, we provide a step-by-step guide on how to perform HRV analysis using an integrative neurophysiological toolkit, NeuroKit2.
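As a taste of what such a step-by-step guide looks like in practice, here is a minimal sketch using NeuroKit2 on a simulated ECG; the duration, sampling rate, and chosen indices are illustrative, not recommendations from the paper.

```python
import neurokit2 as nk

# Simulate five minutes of ECG at 500 Hz as a stand-in for recorded data
ecg = nk.ecg_simulate(duration=300, sampling_rate=500, heart_rate=70)

# Detect R-peaks, the basis of all HRV computation
peaks, info = nk.ecg_peaks(ecg, sampling_rate=500)

# Compute time-, frequency-, and nonlinear-domain HRV indices in one call
hrv = nk.hrv(peaks, sampling_rate=500)
print(hrv[["HRV_RMSSD", "HRV_SDNN", "HRV_HF"]])
```

On real recordings, the same two calls (peak detection, then nk.hrv) replace much of the bespoke preprocessing that makes HRV pipelines hard to reproduce.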
That library includes two fast-paced BBC television series, Sherlock and Merlin. In one study published last year, Hasson and his colleagues had participants lie in an fMRI scanner while watching part of an episode of one of the two shows, which were chosen because they were engaging and had twisting plots likely to be easily remembered. Later, one person was recorded recounting the episode while being scanned again, this time in the dark. Then, people who hadn’t seen the shows listened to that recording. These participants were scanned as they mentally constructed the show from what they heard.
On the face of it, watching a video clip, recalling it later, and imagining it from someone else’s description are very different cognitive processes. But Hasson found that the brain patterns across those processes were similar in certain higher-order areas. That trend was scene-specific, so that (spoiler alert!) when Sherlock gets into a cab driven by the man he has realized is responsible for several murders disguised as suicides, there were shared patterns of brain activation in study participants regardless of whether they were watching, remembering, or imagining that scene.
The experiment also revealed something about memory. The more similar the patterns in the brain of the person who originally viewed the episode and the person who mentally constructed it when listening to the description, the better the transfer of memories from the speaker to the listener, as measured by a separate comprehension test. The findings suggest that the same areas used to recall and reconstruct a memory are involved in the construction of someone else’s memory in our imagination. “Perhaps the key function of memory is not to represent the past, but to be used as a tool to share our knowledge with others and predict the future,” Hasson says. He expects the results would be even more pronounced in real-time or face-to-face conversations.
It is unclear whether posttraumatic stress disorder (PTSD) is a universal response to violence found everywhere or if it is culturally specific to certain parts of the world. Zefferman and Mathew interviewed warriors among the Turkana, a population of subsistence pastoralists living in Kenya. Compared with a sample of American military servicemembers who had been treated for PTSD, Turkana were equally likely to experience reactive symptoms such as hypervigilance, which may be more sensitive to experiences of danger, but they were less likely to experience depressive symptoms such as detachment and loss of interest, which may be related to feelings of moral violation. These findings suggest that symptoms of PTSD directly tied to dangers of combat may be universal, whereas the symptoms tied to the morality of combat may be more culturally variable.
What we are given is taken away,
but we manage to keep it secretly.
We lose everything, but make harvest
of the consequence it was to us. Memory
builds this kingdom from the fragments
and approximation. We are gleaners who fill
the barn for the winter that comes on.
Still, cohesion would have been essential, and this is the core of Slingerland’s argument: Bonding is necessary to human society, and alcohol has been an essential means of our bonding. Compare us with our competitive, fractious chimpanzee cousins. Placing hundreds of unrelated chimps in close quarters for several hours would result in “blood and dismembered body parts,” Slingerland notes—not a party with dancing, and definitely not collaborative stone-lugging.
Human civilization requires “individual and collective creativity, intensive cooperation, a tolerance for strangers and crowds, and a degree of openness and trust that is entirely unmatched among our closest primate relatives.” It requires us not only to put up with one another, but to become allies and friends.
As to how alcohol assists with that process, Slingerland focuses mostly on its suppression of prefrontal-cortex activity, and how resulting disinhibition may allow us to reach a more playful, trusting, childlike state…
Sayette, for his part, has spent much of the past 20 years trying to get to the bottom of a related question: why social drinking can be so rewarding. In a 2012 study, he and Creswell divided 720 strangers into groups, then served some groups vodka cocktails and other groups nonalcoholic cocktails. Compared with people who were served nonalcoholic drinks, the drinkers appeared significantly happier, according to a range of objective measures. Maybe more important, they vibed with one another in distinctive ways. They experienced what Sayette calls “golden moments,” smiling genuinely and simultaneously at one another. Their conversations flowed more easily, and their happiness appeared infectious. Alcohol, in other words, helped them enjoy one another more.
How does your brain change with each story that you hear? How can storytelling shape your memories? In this talk, Dr. Uri Hasson explores how brain activity is shared between listeners of the same story, and how those shared neural responses are coupled to and shaped by the neural activity in the storyteller’s brain.
After previous studies with animal subjects found that new experiences are beneficial for brain development, a group of researchers attempted a similar experiment in humans. They enlisted subjects in New York City and Miami and tracked GPS data on their phones, while texting them every other day to ask about their mood. The study was conducted pre-pandemic and published in Nature Neuroscience in May 2020.
“What we found was that for every person, on days when they displayed greater exploration, greater ‘roaming entropy’, they reported feeling happier. It’s as simple as that,” said co-author Dr. Aaron Heller of the University of Miami. His team then did a more nuanced analysis in which they counted how many new places their subjects visited. “The experience of novelty, or going to places you had never been before, actually seemed to have an even larger association with positive emotion on that day.”
It has been suggested that the human species may be undergoing an evolutionary transition in individuality (ETI). But there is disagreement about how to apply the ETI framework to our species, and whether culture is implicated as either cause or consequence. Long-term gene–culture coevolution (GCC) is also poorly understood. Some have argued that culture steers human evolution, while others proposed that genes hold culture on a leash. We review the literature and evidence on long-term GCC in humans and find a set of common themes.
First, culture appears to hold greater adaptive potential than genetic inheritance and is probably driving human evolution. The evolutionary impact of culture occurs mainly through culturally organized groups, which have come to dominate human affairs in recent millennia. Second, the role of culture appears to be growing, increasingly bypassing genetic evolution and weakening genetic adaptive potential. Taken together, these findings suggest that human long-term GCC is characterized by an evolutionary transition in inheritance (from genes to culture) which entails a transition in individuality (from genetic individual to cultural group). Thus, research on GCC should focus on the possibility of an ongoing transition in the human inheritance system.
Contradictions constitute one fundamental aspect of human life. Humans are steeped in contradictory thoughts, feelings, and attitudes. In this debate, five anthropologists adopt an individual-centered and phenomenological perspective on contradictions. How can one live with them? How can they be described from an anthropological point of view? Should we rethink our dear notion of the “social agent” through that of contradiction?
Seven elements that anthropology can provide: (1) Gap between Here & Elsewhere, (2) Question Universal Claims, (3) Border between Humanities & Social Sciences, (4) Addressing Rise of the Financiers, (5) Dispelling Savage Illusions, (6) Relations & Processes, not Essences
Psychiatry has long debated whether the causes of mental illness can be better explained by reductionist or pluralistic accounts. Although the former relies on commonsense scientific bottom-up causal models, the latter (which typically include environmental, psychological, and/or socio-cultural risk factors) requires top-down causal processes often viewed with skepticism, especially by neuroscientists. We begin with four clinical vignettes which illustrate self-interventions wherein high-order psychological processes (e.g. religious beliefs or deep interpersonal commitments) appear to causally impact the risk for or the course of psychiatric/behavioral disorders.
We then propose a model for how to understand this sort of top-down self-causation. Our model relies centrally on the concept of a control variable which, like a radio tuning dial, can implement a series of typically unknown physical processes to obtain the desired ends. We set this control variable in the context of an interventionist account of causation that assumes that a cause (C) produces an effect (E) when intervening on C (by manipulating it) is associated with a change in E. We extend this framework by arguing that certain psychological changes can result from individuals intervening on their own mental states and/or selection of environments. This in turn requires a conception of the self that contains mental capacities that are at least partially independent of one another. Although human beings cannot directly intervene on the neurobiological systems which instantiate risk for psychiatric illness, they can, via control variables at the psychological level, and/or by self-selection into protective environments, substantially alter their own risk.
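A toy simulation can make the interventionist criterion concrete. In this sketch (every variable and coefficient is invented for illustration), E is generated by a structural equation in which C appears; setting C by intervention shifts the distribution of E, which is exactly what licenses the claim that C causes E.

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_E(c, n=10_000):
    """Structural equation for effect E with control variable C.
    The coefficient and noise model are illustrative assumptions."""
    return 0.8 * c + rng.standard_normal(n)

# Intervening to set C at different values shifts E's distribution:
print(sample_E(0.0).mean())   # ~0.0
print(sample_E(1.0).mean())   # ~0.8, so C counts as a cause of E
```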
“Mr. A,” a 24-year-old man, presents for evaluation of worsening depression. He describes a history of depression since adolescence, although he notes that he suffered a troubled childhood, including emotional neglect. He believes a recent breakup and having been denied a promotion precipitated this episode. “I’m sleeping all the time, and my body feels heavy,” he adds. He also reports increased appetite, weight gain, and “urges to cut, which I have not done in years.” However, he remains social and actively involved in several hobbies. He discontinued bupropion and escitalopram in the past because of “terrible headaches and irritability.” Initially, you consider starting lamotrigine. However, your office recently implemented a clinical decision support system that recommends a trial of phenelzine. The patient’s symptoms remit entirely on the medication suggested by the system. Curious as to how the system decided on this treatment, you download several papers on its development.
The period between 600 and 400 ka is a critical phase for human evolution in Europe. The south and northwest saw a dramatic increase in sites, the spread of handaxe technology alongside bone and wooden tool manufacture, efficient hunting techniques, and the use of fire. Lithic assemblages show considerable variation, including the presence/absence of handaxes and tool morphology.
To explain this variation, we propose the Cultural Mosaic Model, which suggests that there is a range of expressions of the Acheulean, with local resources being instrumental in creating distinct material cultures with or without handaxes. We argue that if typologically and technologically distinct assemblage types are regionally distributed, chronologically separated, and persistent over time, then they are unlikely to be caused purely by raw material constraints or functional variation but rather reflect populations with different material cultures…
We suggest that group expression through material culture was an important stage in social development by promoting group cohesion, larger group size, better cooperation, improved knowledge transfer, and enabling populations to survive in larger foraging territories in northern Europe.
In the light of these discoveries, D’Errico has developed a scenario to explain how number systems might have arisen through the very act of producing such artefacts. His hypothesis is one of only two published so far for the prehistoric origin of numbers.
It all started by accident, he suggests, as early hominins unintentionally left marks on bones while they were butchering animal carcasses. Later, the hominins made a cognitive leap when they realized that they could deliberately mark bones to produce abstract designs — such as those seen on an approximately 430,000-year-old shell found in Trinil, Indonesia. At some point after that, another leap occurred: individual marks began to take on meaning, with some of them perhaps encoding numerical information. The Les Pradelles hyena bone is potentially the earliest known example of this type of mark-making, says D’Errico. He thinks that with further leaps, or what he dubs cultural exaptations, such notches eventually led to the invention of number signs such as 1, 2 and 3.
On the basis of recent advancements in both neuroscience and archaeology, we propose a plausible biocultural mechanism at the basis of cultural evolution. The proposed mechanism, which relies on the notions of cultural exaptation and cultural neural reuse, may account for the asynchronous, discontinuous, and patchy emergence of innovations around the globe. Cultural exaptation refers to the reuse of previously devised cultural features for new purposes. Cultural neural reuse refers to cases in which exposure to cultural practices induces the formation, activation, and stabilization of new functional and/or structural brain networks during the individual lifespan.
The invention of writing is interpreted as a case of cultural exaptation of previous devices to record information, in use since at least the Early Later Stone Age and the beginning of the Upper Paleolithic (44,000 years before present). The measurable changes in brain structure and functioning caused by learning to read are proposed as an exemplar case of cultural neural reuse. It is argued that repeated cycles of cultural exaptation, development of appropriate strategies of cultural transmission, and ensuing cultural neural reuse represent the fundamental mechanism that has regulated the cultural evolution of our lineage.
McDaniel was both a potential victim and a potential perpetrator, and the visitors on his porch treated him as such. A social worker told him that he could help him if he was interested in finding assistance to secure a job, for example, or mental health services. And police were there, too, with a warning: from here on out, the Chicago Police Department would be watching him. The algorithm indicated Robert McDaniel was more likely than 99.9 percent of Chicago’s population to either be shot or to have a shooting connected to him. That made him dangerous, and top brass at the Chicago PD knew it. So McDaniel had better be on his best behavior.
The idea that a series of calculations could predict that he would soon shoot someone, or be shot, seemed outlandish. At the time, McDaniel didn’t know how to take the news.
But the visit set a series of gears in motion. This Kafkaesque policing nightmare — a circumstance in which police identified a man to be surveilled based on a purely theoretical danger — would seem to cause the thing it predicted, in a deranged feat of self-fulfilling prophecy.
Humans often seek information to minimize the pervasive effect of uncertainty on decisions. Current theories explain how much knowledge people should gather before a decision, based on the cost–benefit structure of the problem at hand. Here, we demonstrate that this framework omits a crucial agent-related factor: the cognitive effort expended while collecting information. Using an active sampling model, we unveil a speed–efficiency trade-off whereby more informative samples take longer to find. Crucially, under sufficient time pressure, humans can break this trade-off, sampling both faster and more efficiently.
Computational modelling demonstrates the existence of a cost of cognitive effort which, when incorporated into theoretical models, provides a better account of people’s behaviour and also relates to self-reported fatigue accumulated during active sampling. Thus, the way people seek knowledge to guide their decisions is shaped not only by task-related costs and benefits, but also crucially by the quantifiable computational costs incurred.
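As a rough illustration of that claim (the functional forms below are invented, not the authors' model), suppose a sample's informativeness grows concavely with the time spent searching for it, while both elapsed time and cognitive effort carry linear costs. Adding the effort term shortens the optimal search, i.e., it predicts faster but less efficient sampling, consistent with the trade-off described above.

```python
import numpy as np

def info_gain(t):
    """Diminishing returns: longer searches yield more informative samples."""
    return np.log1p(3.0 * t)

def net_value(t, time_cost=0.5, effort_cost=0.3):
    """Expected value of a sample found after t units of search."""
    return info_gain(t) - (time_cost + effort_cost) * t

ts = np.linspace(0.01, 5.0, 500)
print("optimal search time with effort cost:", ts[np.argmax(net_value(ts))])
print("optimal search time, effort ignored: ",
      ts[np.argmax(net_value(ts, effort_cost=0.0))])
```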
Dr. Krakauer is currently John C. Malone Professor of Neurology, Neuroscience, and Physical Medicine and Rehabilitation, and Director of the Brain, Learning, Animation, and Movement Lab (www.BLAM-lab.org) at The Johns Hopkins University School of Medicine. His areas of research interest are:
(1) Experimental and computational studies of motor control and motor learning in humans
(2) Tracking long-term motor skill learning and its relation to higher cognitive processes such as decision-making
(3) Prediction of motor recovery after stroke
(4) Mechanisms of spontaneous motor recovery after stroke in humans and in mouse models
(5) New neuro-rehabilitation approaches for patients in the first 3 months after stroke
Dr. Krakauer is also co-founder of the video gaming company Max and Haley, and of the creative engineering Hopkins-based project named KATA. KATA and M&H are both predicated on the idea that animal movement based on real physics is highly pleasurable and that this pleasure is hugely heightened when the animal movement is under the control of our own movements. A simulated dolphin and other cetaceans developed by KATA have led to a therapeutic game, interfaced with an FDA-approved 3D exoskeletal robot, which is being used in an ongoing multi-site rehabilitation trial for early stroke recovery. Dr. Krakauer’s book, “Broken Movement: The Neurobiology of Motor Recovery after Stroke,” has been published by the MIT Press.
In this tangle between very powerful institutions and very powerful cultural logics, there are serious problems that are deeply rooted. The great democratic revolutions of Western Europe and North America were rooted in the intellectual and cultural revolution of Enlightenment; the Enlightenment underwrote those political transformations. If America’s hybrid Enlightenment underwrote the birth of liberal democracy in the United States, what underwrites it now?
What is going to underwrite liberal democracy in the 21st century? To me, it’s not obvious. That’s the big puzzle I’m working through right now. But it bears on this issue of culture wars, because if there’s nothing that we share in common—if there is no hybrid Enlightenment that we share—then what are the sources we can draw upon to come together and find any kind of solidarity?
But a counterpoint to this brain-centric view of sleep has emerged. Researchers have noticed that molecules produced by muscles and some other tissues outside the nervous system can regulate sleep. Sleep affects metabolism pervasively in the body, suggesting that its influence is not exclusively neurological. And a body of work that’s been growing quietly but consistently for decades has shown that simple organisms with less and less brain spend significant time doing something that looks a lot like sleep. Sometimes their behavior has been pigeonholed as only “sleeplike,” but as more details are uncovered, it has become less and less clear why that distinction is necessary.
It appears that simple creatures — including, now, the brainless hydra — can sleep. And the intriguing implication of that finding is that sleep’s original role, buried billions of years back in life’s history, may have been very different from the standard human conception of it. If sleep does not require a brain, then it may be a profoundly broader phenomenon than we supposed.
Behavioral interventions for prevention and treatment are an important part of the fight against drug abuse and conditions such as HIV/AIDS and mental illness. Among the challenges faced by scientists is how and when to alter the course of treatment for participants in the intervention. Adaptive interventions (also known as “adaptive treatment strategies” or “dynamic treatment regimens”) change based on what is best for the patient at that time.
Just-in-time adaptive interventions (JITAIs) are a special type of adaptive intervention where—thanks to mobile technology like activity sensors and smartphones—an intervention can be delivered when and where it is needed.
Technically speaking, an adaptive intervention is a sequence of decision rules that specify how the intensity or type of treatment should change depending on the patient’s needs. Methodology Center researchers are developing data-analytic methods for constructing decision rules that allow researchers to build better JITAIs and adaptive interventions.
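To make “a sequence of decision rules” concrete, here is a minimal hypothetical sketch; the tailoring variables, thresholds, and treatment options are invented for illustration and are not from any validated intervention.

```python
from dataclasses import dataclass

@dataclass
class PatientState:
    craving: float                  # 0-10 self-report from a smartphone prompt
    steps_last_hour: int            # from a wearable activity sensor
    responded_to_last_prompt: bool

def decision_rule(state: PatientState) -> str:
    """Map the currently sensed state to a treatment option (illustrative)."""
    if state.craving >= 7:
        return "push_coping_exercise"   # intensify: deliver support right now
    if state.steps_last_hour < 100 and state.responded_to_last_prompt:
        return "suggest_short_walk"     # light-touch nudge when inactive
    return "no_contact"                 # avoid over-burdening the participant

print(decision_rule(PatientState(craving=8, steps_last_hour=50,
                                 responded_to_last_prompt=True)))
```

A JITAI is then just such a rule evaluated whenever fresh sensor data arrive, which is what makes the mobile-technology component essential.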
For a study in the journal eLife, a research team led by Aaron Blackwell of Washington State University and Adrian Jaeggi of the University of Zurich tracked 13 different health variables across 40 Tsimane communities, analyzing them against each person’s wealth and the degree of inequality in each community. While some have theorized that inequality’s health impacts are universal, the researchers found only two outcomes robustly associated with inequality: higher blood pressure and respiratory disease.
“The connection between inequality and health is not as straightforward as what you would see in an industrialized population. We had a lot of mixed results,” said Blackwell, a WSU associate professor of anthropology. “These findings suggest that at this scale, inequality is not at the level that causes health problems. Instead maybe it’s the extreme inequality in a lot of modern environments that causes health problems since it’s unlike any inequality we’ve ever had in our evolutionary history.”
What do all these attacks add up to? The exact targets of CRT’s critics vary wildly, but it is obvious that most critics simply do not know what they are talking about. Instead, CRT functions for the right today primarily as an empty signifier for any talk of race and racism at all, a catch-all specter lumping together “multiculturalism,” “wokeism,” “anti-racism,” and “identity politics”—or indeed any suggestion that racial inequities in the United States are anything but fair outcomes, the result of choices made by equally positioned individuals in a free society. They are simply against any talk, discussion, mention, analysis, or intimation of race—except to say we shouldn’t talk about it.
“The Descent of Man” is one of the most influential books in the history of human evolutionary science. We can acknowledge Darwin for key insights but must push against his unfounded and harmful assertions. Reflecting on “Descent” today one can look to data demonstrating unequivocally that race is not a valid description of human biological variation, that there is no biological coherence to “male” and “female” brains or any simplicity in biological patterns related to gender and sex, and that “survival of the fittest” does not accurately represent the dynamics of evolutionary processes.
The scientific community can reject the legacy of bias and harm in the evolutionary sciences by recognizing, and acting on, the need for diverse voices and making inclusive practices central to evolutionary inquiry. In the end, learning from “Descent” illuminates the highest and most interesting problem for human evolutionary studies today: moving toward an evolutionary science of humans instead of “man.”
Results
Tests for selection using polygenic scores failed to find evidence of natural selection when the less biased within-family GWAS effect sizes were used. Tests for selection using Fst values did not find evidence of natural selection. Expected mean difference in IQ was substantially smaller than postulated by hereditarians, even under unrealistic assumptions that overestimate genetic contribution.
Conclusion
Given these results, hereditarian claims are not supported in the least. Cognitive performance does not appear to have been under diversifying selection in Europeans and Africans. In the absence of diversifying selection, the best case estimate for genetic contributions to group differences in cognitive performance is substantially smaller than hereditarians claim and is consistent with genetic differences contributing little to the Black–White gap.
As critical race theory becomes increasingly politicized and attacked by Republicans, CNN’s Jason Carroll explains what the concept is, and what it isn’t.
The Southern African concept of ubuntu offers a crucial lesson for the U.S.: By recognizing our interconnections and actively undoing systemic racism, we can all become more fully human.
Even before Hardin’s ‘The Tragedy of the Commons’ was published, however, the young political scientist Elinor Ostrom had proven him wrong. While Hardin speculated that the tragedy of the commons could be avoided only through total privatisation or total government control, Ostrom had witnessed groundwater users near her native Los Angeles hammer out a system for sharing their coveted resource. Over the next several decades, as a professor at Indiana University Bloomington, she studied collaborative management systems developed by cattle herders in Switzerland, forest dwellers in Japan, and irrigators in the Philippines. These communities had found ways of both preserving a shared resource – pasture, trees, water – and providing their members with a living. Some had been deftly avoiding the tragedy of the commons for centuries; Ostrom was simply one of the first scientists to pay close attention to their traditions, and analyse how and why they worked.
The features of successful systems, Ostrom and her colleagues found, include clear boundaries (the ‘community’ doing the managing must be well-defined); reliable monitoring of the shared resource; a reasonable balance of costs and benefits for participants; a predictable process for the fast and fair resolution of conflicts; an escalating series of punishments for cheaters; and good relationships between the community and other layers of authority, from household heads to international institutions.
A common mistake is to speak and think of ‘circular economy’ or ‘regenerative culture’ in the singular. Such thinking is informed by the profoundly un-ecological neoliberal economic doctrine of ‘scaling-up’ and ‘globalising’. To create human economic and industrial patterns that fit into the way life sustains ecosystems and planetary health, we need to co-create diverse circular economies in service of diverse regenerative cultures. The underlying patterns and principles might be the same, yet the place-sourced expressions of these will be unique adaptations to the bio-cultural uniqueness of their bioregional context.
Finally, do human colonies on the wane also become increasingly less capable of differentiation? We know that, when human societies feel threatened, they protect themselves: they zero in on short-term gains, even at the cost of their long-term futures. And they scale up their ‘inclusion criteria’. They value sameness over difference; stasis over change; and they privilege selfish advantage over civic sacrifice.
Viewed this way, the comparison seems compelling. In crisis, the colony introverts, collapsing inwards as inequalities escalate and there’s not enough to go around. In a crisis, as we’ve seen during the COVID-19 pandemic, people define ‘culture’ more aggressively, looking for alliances in the very places where they can invest their threatened social trust; for the centre is threatened and perhaps ‘cannot hold’.
Human cultures, like cell cultures, are not steady states. They can have split purposes as their expanding and contracting concepts of insiders and outsiders shift, depending on levels of trust, and on the relationship between available resources and how many people need them. Trust, in other words, is not only related to moral engagement, or the health of a moral economy. It’s also dependent on the dynamics of sharing, and the relationship of sharing practices to group size – this last being a subject that fascinates anthropologists.
The disease affects more than 260 million people around the world, but we barely understand it. We know that the balance between the prefrontal cortex (at the front of the brain) and the anterior cingulate cortex (tucked just behind it) plays some role in regulating mood, as does the chemical serotonin. But what actually causes depression? Is there a tiny but important area of the brain that researchers should focus on? And does there even exist a singular disorder called depression, or is the label a catch-all denoting a bunch of distinct disorders with similar symptoms but different brain mechanisms? “Fundamentally,” says Hill, “we don’t have a biological understanding of depression or any other mental illness.”
The problem, for Hill, requires an ambitious, participatory approach. If neuroscientists are to someday understand the biological mechanisms behind mental illness—that is, if they are to figure out what literally happens in the brain when a person is depressed, manic, or delusional—they will need to pool their resources. “There’s not going to be a single person who figures it all out,” he says. “There’s never going to be an Einstein who solves a set of equations and shouts, ‘I’ve got it!’ The brain is not that kind of beast.”
Prediction of future sensory input based on past sensory information is essential for organisms to effectively adapt their behavior in dynamic environments. Humans successfully predict future stimuli in various natural settings. Yet, it remains elusive how the brain achieves effective prediction despite enormous variations in sensory input rate, which directly affect how fast sensory information can accumulate. We presented participants with acoustic sequences capturing temporal statistical regularities prevalent in nature and investigated neural mechanisms underlying predictive computation using MEG.
By parametrically manipulating sequence presentation speed, we tested two hypotheses: neural prediction relies on integrating past sensory information over fixed time periods or fixed amounts of information. We demonstrate that across halved and doubled presentation speeds, predictive information in neural activity stems from integration over fixed amounts of information. Our findings reveal the neural mechanisms enabling humans to robustly predict dynamic stimuli in natural environments despite large sensory input rate variations.
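The two hypotheses can be written as one-line estimators. In the toy sketch below (invented numbers, not the MEG analysis), a fixed-time integrator includes more tones as the presentation rate doubles, whereas a fixed-information integrator returns the same estimate at every speed, which is the signature the authors report.

```python
import numpy as np

rng = np.random.default_rng(0)
tones = rng.standard_normal(64)          # one stimulus sequence

def fixed_time_estimate(seq, rate_hz, window_s=1.0):
    """Integrate over a fixed temporal window: item count scales with rate."""
    return seq[-int(window_s * rate_hz):].mean()

def fixed_info_estimate(seq, n_items=8):
    """Integrate over a fixed number of items, whatever the rate."""
    return seq[-n_items:].mean()

for rate in (4, 8, 16):                  # halved, baseline, doubled speed
    print(rate, fixed_time_estimate(tones, rate), fixed_info_estimate(tones))
```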
Several biological and social contagion phenomena, such as superspreading events or social reinforcement, are the result of multi-body interactions, for which hypergraphs offer a natural mathematical description. In this paper, we develop a novel mathematical framework based on approximate master equations to study contagions on random hypergraphs with a heterogeneous structure, both in terms of group size (hyperedge cardinality) and of membership of nodes to groups (hyperdegree). The characterization of the inner dynamics of groups provides an accurate description of the contagion process without losing analytical tractability. Using a contagion model where multi-body interactions are mapped onto a nonlinear infection rate, our two main results show how large groups are influential, in the sense that they drive both the early spread of a contagion and its endemic state (i.e., its stationary state).
First, we provide a detailed characterization of the phase transition, which can be continuous or discontinuous with a bistable regime, and derive analytical expressions for the critical and tricritical points. We find that large values of the third moment of the membership distribution suppress the emergence of a discontinuous phase transition. Furthermore, the combination of heterogeneous group sizes and nonlinear contagion facilitates the onset of a mesoscopic localization phase, where contagion is sustained only by the largest groups, thereby inhibiting bistability as well. Second, we formulate a simple problem of optimal seeding for hypergraph contagions to compare two strategies: tuning the allocation of seeds according to either node individual properties or according to group properties. We find that, when the contagion is sufficiently nonlinear, groups are more effective seeds of contagion than individual nodes.
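A brute-force simulation (not the approximate master equations themselves) conveys the flavor of the nonlinear model. In the sketch below, each group infects each of its susceptible members at rate beta * i**nu, with i the number of currently infected members; all parameters and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random hypergraph: 60 groups of 2-9 members drawn from 200 nodes
n_nodes, n_groups = 200, 60
groups = [rng.choice(n_nodes, size=rng.integers(2, 10), replace=False)
          for _ in range(n_groups)]

infected = np.zeros(n_nodes, dtype=bool)
infected[rng.choice(n_nodes, size=5, replace=False)] = True

beta, nu, mu, dt = 0.05, 1.5, 0.05, 1.0   # nu > 1: superlinear group contagion
for _ in range(200):
    new_inf = infected.copy()
    for g in groups:
        i = infected[g].sum()
        if i:
            p = 1 - np.exp(-beta * i**nu * dt)  # per-susceptible infection prob.
            sus = g[~infected[g]]
            new_inf[sus] |= rng.random(sus.size) < p
    recovered = infected & (rng.random(n_nodes) < mu * dt)
    new_inf[recovered] = False
    infected = new_inf

print("endemic prevalence in this toy run:", infected.mean())
```

Because the infection probability grows superlinearly in i, the largest groups dominate both the early spread and the stationary state, matching the "large groups are influential" result.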
We present a review of frequency effects in memory, accompanied by a theory of memory, according to which the storage of new information in long-term memory (LTM) depletes a limited pool of working memory (WM) resources as an inverse function of item strength. We support the theory by showing that items with stronger representations in LTM (e.g., high-frequency items) are easier to store, bind to context, and bind to one another; that WM resources are involved in storage and retrieval from LTM; and that WM performance is better for stronger, more familiar stimuli.
We present a novel analysis of preceding item strength, in which we show from nine existing studies that memory for an item is higher if during study it was preceded by a stronger item (e.g., a high frequency word). This effect is cumulative (the more prior items are of high frequency, the better), continuous (memory proportional to word frequency of preceding item), interacts with current item strength (larger for weaker items), and interacts with lag (decreases as the lag between the current and prior study item increases). A computational model that implements the theory is presented, which accounts for these effects. We discuss related phenomena that the model/theory can explain.
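The depletion mechanism can be caricatured in a few lines (the functional forms and numbers below are invented; the paper's computational model is more elaborate). Storing a weak item drains more of the shared WM resource, so an item that follows a weak item is encoded with less resource than one that follows a strong item, reproducing the preceding-item-strength effect in miniature.

```python
def encode_list(strengths, recovery=0.3):
    """Encoding quality = available WM resource x item strength (toy model)."""
    resources, quality = 1.0, []
    for s in strengths:
        quality.append(resources * s)                 # encode with what's left
        resources = max(0.0, resources - (1.0 - s))   # weak items deplete more
        resources = min(1.0, resources + recovery)    # partial recovery per lag
    return quality

print(encode_list([0.9, 0.3]))   # weak item after a strong one: encoded at 0.30
print(encode_list([0.3, 0.3]))   # weak item after a weak one:   encoded at 0.18
```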
Learning from direct experience is easy—we can always use trial and error—but how do we learn from nondirect (nonlocal) experiences? For this, we need additional mechanisms that bridge time and space. In rodents, hippocampal replay is hypothesized to promote this function. Liu et al. measured high-temporal-resolution brain signals using human magnetoencephalography combined with a new model-based, visually oriented, multipath reinforcement memory task. This task was designed to differentiate local versus nonlocal learning episodes within the subject. They found that reverse sequential replay in the human medial temporal lobe supports nonlocal reinforcement learning and is the underlying mechanism for solving complex credit assignment problems such as value learning.
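A cartoon of why reverse replay solves credit assignment (a generic TD-style sketch, not Liu et al.'s analysis): replaying the just-experienced path backwards after a reward lets value propagate from the rewarded state all the way to distant, "nonlocal" states in a single sweep.

```python
path = ["s0", "s1", "s2", "s3"]      # states visited before reward arrived
V = {s: 0.0 for s in path}           # learned state values
alpha, gamma, reward = 0.5, 0.9, 1.0

target = reward
for s in reversed(path):             # reverse sequential replay
    V[s] += alpha * (target - V[s])  # update toward the backed-up target
    target = gamma * V[s]            # pass credit on to the predecessor
print(V)                             # even s0, far from reward, gains value
```

Replaying forward instead would need many repetitions to push credit back to s0; the reverse order does it in one pass.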
Graph theory is now becoming a standard tool in system-level neuroscience. However, endowing observed brain anatomy and dynamics with a complex network representation often involves covert theoretical assumptions and methodological choices that affect how networks are reconstructed from experimental data, and ultimately the resulting network properties and their interpretation. Here, we review some fundamental conceptual underpinnings and technical issues associated with brain network reconstruction, and discuss how, taken together, they shape our understanding of the organization of brain function.
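One concrete example of such a methodological choice, sketched below with purely synthetic data: estimating functional connectivity as pairwise correlation between regional time series, then binarizing with a threshold. The threshold alone changes the network's edge density, and with it any graph property derived downstream.

```python
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_timepoints = 20, 500
ts = rng.standard_normal((n_regions, n_timepoints))
ts[1] += 0.7 * ts[0]                      # inject one genuinely correlated pair

corr = np.corrcoef(ts)                    # functional connectivity estimate
for threshold in (0.1, 0.3, 0.5):
    adj = (np.abs(corr) > threshold) & ~np.eye(n_regions, dtype=bool)
    density = adj.sum() / (n_regions * (n_regions - 1))
    print(f"threshold {threshold}: edge density {density:.3f}")
```

Same data, three different networks: exactly the kind of covert reconstruction choice the review argues must be made explicit.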
Cognition can be defined as computation over meaningful representations in the brain to produce adaptive behaviour. Two views of the relationship between cognition and the brain are largely implicit in the literature. The Sherringtonian view explains cognition as the result of operations performed at nodes in a network on the signals passed between them, operations that are implemented by specific neurons and their connections in circuits in the brain.
The contrasting Hopfieldian view explains cognition as the result of transformations between or movement within representational spaces that are implemented by neural populations. Thus, the Hopfieldian view relegates details regarding the identity of and connections between specific neurons to the status of secondary explainers. Only the Hopfieldian approach has the representational and computational resources needed to develop novel neurofunctional objects that can serve as primary explainers of cognition.
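A Hopfield-style attractor network makes the population-level picture concrete. The minimal sketch below, with arbitrary illustrative sizes, stores a few patterns in Hebbian weights and lets a corrupted cue relax toward a stored attractor; the computation is a trajectory of the whole population state through a representational space, not an operation at any single node.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
patterns = rng.choice([-1, 1], size=(3, n))    # stored memories
W = (patterns.T @ patterns) / n                # Hebbian weight matrix
np.fill_diagonal(W, 0)

state = patterns[0].copy()
flip = rng.choice(n, size=25, replace=False)   # corrupt 25% of the cue
state[flip] *= -1

for _ in range(10):                            # relax toward an attractor
    state = np.sign(W @ state)
    state[state == 0] = 1

print("overlap with stored pattern:", (state @ patterns[0]) / n)
```

The retrieved state matches the stored pattern almost perfectly even though no individual unit "knows" the memory; identity and wiring of specific units are, in the Hopfieldian sense, secondary.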
Complex social play is well documented across many animals. During play, animals often use signals that facilitate beneficial interactions and reduce potential costs, such as escalation to aggression. Although greater focus has been given to visual play signals, here we demonstrate that vocalisations constitute a widespread mode of play signalling across species.
Our review indicates that vocal play signals are usually inconspicuous, although loud vocalisations, which suggest a broadcast function, are present in humans and some other species. Spontaneous laughter in humans shares acoustic and functional characteristics with play vocalisations across many species, but most notably with other great apes. Play vocalisations in primates and other mammals often include sounds of panting, supporting the theory that human laughter evolved from an auditory cue of laboured breathing during play.
Human social complexity allowed laughter to evolve from a play-specific vocalisation into a sophisticated pragmatic signal that interacts with a large suite of other multimodal social behaviours in both intragroup and intergroup contexts. This review provides a foundation for detailed comparative analyses of play vocalisations across diverse taxa, which can shed light on the form and function of human laughter and, in turn, help us better understand the evolution of human social interaction.
Cognitive Processes Shaping Individual and Collective Belief Systems, a PhD defense in the Departments of Psychology and Neuroscience at Princeton University
Technology companies like Cisco, Microsoft, and SAP, for instance, found ways to gamify everything from learning social media skills to verifying language translations to boosting sales performance.
Today, thanks to science, we know a lot more about when gamification really works, and what its boundaries seem to be. Beyond the gamified apps and software we use to learn new skills, companies like Amazon and Uber now deploy it to boost worker productivity. But to get the results we seek, in our own lives and in the workplace, it’s important to understand when gamification will work—and when it will only make matters worse…
My colleagues argue that their study highlights a common mistake companies make with gamification: it is unhelpful, and can even be harmful, if people feel that their employer is forcing them to participate in “mandatory fun.” Another issue is that if a game is a dud, it doesn’t do anyone any good. Gamification can be a miraculous way to boost engagement with monotonous tasks at work and beyond, or an over-hyped strategy doomed to fail. What matters most is how the people playing the game feel about it…
At its best, gamification seems to work when it helps people achieve the goals they want to reach anyway by making the process of goal achievement more exciting. When people fully buy into a game, the results can be impressive, durably improving volunteers’ productivity, boosting worker morale, and even, as seen in one recent study, robustly helping families increase their step counts. But gamification can tank when players don’t buy in. If a game is mandatory and designed to encourage people to do something they don’t particularly care to do (like achieving an outstanding record of attendance at school), or if it feels manipulative, it can backfire.
This painting draws together images from neuroscience (neural connections in the brain) and Buddhism (the lotus) to express the blissful aura of the well-meditated brain. Recent scientific research indicates that the practice of meditation produces physical changes in parts of the brain associated with memory, sense of self, empathy, and stress. Although it’s highly unlikely that meditating on the truth in the lotus will actually cause your neurons to look like this, it’s a fun idea to express artistically. In soft but vivid shades of blue, from cerulean to indigo.
The scientist prodded his Burmese interview subject, whose name was Ma Tin Aung Myo, for details about the Japanese soldier she said had died near that spot many years earlier, years before Ma Tin was born. She proceeded to tell the scientist from America facts about the dead man’s life that she shouldn’t have known. Her claim was outrageous and dangerous, and yet, as she unfolded the dead man’s story, she was unequivocal: she was that soldier, the reincarnation of a man cut down in his prime by enemy bullets.
Professor Ian Stevenson leaned into his questions, pressing her, daring her. He needed a breakthrough, with his credibility and standing at his university on the brink. His life’s work might not recover otherwise. Ma Tin Aung Myo, the young woman with the short haircut and baggy clothes, answered his questions gamely. Then her demeanor changed. Looking the scientist square in the eyes, she issued a shocking request in Burmese.
The paper was inspired in part by a 2017 viral Twitter hashtag, #CatSquares, in which users posted pictures of their cats sitting inside squares marked out on the floor with tape—kind of a virtual box. The following year, lead author Gabriella Smith, a graduate student at Hunter College (CUNY) in New York City, attended a lecture by co-author Sarah-Elizabeth Byosiere, who heads the Thinking Dog Center at Hunter. Byosiere studies canine behavior and cognition, and she spoke about dogs’ susceptibility to visual illusions. While playing with her roommate’s cat later that evening, Smith recalled the Twitter hashtag and wondered if she could find a visual illusion that looked like a square to test on cats.
Smith found it in the work of the late Italian psychologist and artist Gaetano Kanizsa, who was interested in illusory (subjective) contours that visually evoke the sense of an edge in the brain even if there isn’t really a line or edge there. The Kanizsa square consists of four objects shaped like Pac-Man, oriented with the “mouth” facing inward to form the four corners of a square. Even better, there was a 1988 study that used the Kanizsa square to investigate the susceptibility of two young female cats to illusory contours. The study concluded that, yes, cats are susceptible to the Kanizsa square illusion, suggesting that they perceive subjective contours much like humans.
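For readers who want to see the stimulus geometry, here is a small sketch (using matplotlib; positions and sizes are arbitrary) that draws a Kanizsa square: four discs, each missing the quarter-wedge that faces the square's interior, so the brain fills in edges that are not there.

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Wedge

fig, ax = plt.subplots(figsize=(4, 4))
side, r = 2.0, 0.6
corners = [(0, 0), (side, 0), (side, side), (0, side)]
# Angular range of the wedge that points into the square at each corner.
cutouts = {(0, 0): (0, 90), (side, 0): (90, 180),
           (side, side): (180, 270), (0, side): (270, 360)}
for (x, y) in corners:
    a1, a2 = cutouts[(x, y)]
    # Draw the disc minus its inward-facing quarter (from a2 around to a1).
    ax.add_patch(Wedge((x, y), r, a2, a1 + 360, color="black"))
ax.set_xlim(-1, side + 1)
ax.set_ylim(-1, side + 1)
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```

The illusory square appears to float in front of the four “Pac-Man” shapes even though no line or edge is ever drawn.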
We are now in a time of readily available brain imaging data. Not only are researchers sharing data more than ever before, but large-scale data collection initiatives are also underway with the vision that many future researchers will use the data for secondary analyses. Here I provide an overview of available datasets and some example use cases, including examining individual differences, obtaining more robust findings, supporting reproducibility (both through reanalysis of public input data and through availability as a replication sample), and developing methods. I further discuss a variety of considerations associated with using existing data and the opportunities associated with large datasets. Suggestions for further reading on general neuroimaging and on topic-specific discussions are also provided.
Coffee won’t cure cancer, but it may help to prevent it and possibly other diseases as well. Part of answering the question of coffee’s connection to cancer lies in asking another: what is cancer? At its simplest, cancer is uncontrolled cell growth, which is fundamentally about regulating when genes are, or are not, actively expressed.
My research group studies gene regulation and I can tell you that even a good cup of coffee, or boost of caffeine, won’t cause genes that are turned off or on at the wrong time to suddenly start playing by the rules.
The antioxidants in coffee may actually have a cancer-fighting effect. Remember that antioxidants fight cellular damage. One type of damage that they may help reduce is mutations to DNA, and cancer is caused by mutations that lead to the misregulation of genes.
One Breath Around the World is the latest aquatic spectacle from the French freediving champion Guillaume Néry, and his partner, the French freediver, underwater filmmaker and dancer Julie Gautier. Without the aid of supplied air, Néry plunges into the ocean’s hidden depths, revealing remarkable views of marine geology and wildlife around the globe. Seamlessly transitioning between a range of underwater realms, the video gives the impression that Néry’s journey is taken in a single breath. With stunning camerawork by Gautier, who also held her breath while filming, the duo prove themselves expert explorers of not only water, but space and perspective as well, making these grand underwater landscapes appear almost alien.
Their most significant finding is that one group of bacteria, present in both modern humans and Neanderthals, is specifically adapted to consume starch. This suggests starchy foods such as roots, tubers, and seeds became important in the human diet long before the introduction of farming. Some researchers believe the transition to eating these energy-rich starchy foods may have enabled humans to grow the large brains that characterize our species.
“Understanding the role that food played in the evolutionary development of human uniqueness is complicated because many types of food remains — especially plants — are poorly preserved in the fossil record,” said John Yellen, director of the National Science Foundation’s archaeology and archaeometry program, which supported the research. “This innovative study of ancient bacteria preserved in fossil plaque provides a new and powerful way to understand the evolution of humans and our social and ecological history.”
Your brain becomes much more active during exercise, “perhaps more active than at any other time,” says Maddock. One way neurons communicate is with electrical pulses, and sometimes entire networks of neurons fire in unison, like a group of soccer fans chanting together at a game. These synchronized pulses are known colloquially as brain waves. Different kinds of brain waves, characterized by the number of times they oscillate in a single second, are linked to one’s mental state and mood. Lower-frequency waves occur when we’re running on autopilot: brushing our teeth, driving, or sleeping, for example. Higher-frequency waves, known as beta waves, occur when we’re awake and mentally engaged and are associated with attention, memory, and information processing.
Using tools like the electroencephalogram (EEG), which picks up on these electrical pulses, researchers have found that aerobic exercise causes a shift in the amplitude and frequency of brain waves. More beta waves, in other words, mean that exercisers may be in a more alert state. “The brain is in a different gear when the human being is in motion,” Maddock says.
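As a rough illustration of how such a shift is quantified (with a synthetic signal standing in for a real EEG recording, and using the common 13–30 Hz convention for the beta band), one can estimate a power spectrum and compute the relative beta-band power:

```python
import numpy as np
from scipy.signal import welch

fs = 256                                   # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)
eeg = (np.sin(2 * np.pi * 20 * t)          # 20 Hz component in the beta range
       + 0.5 * np.sin(2 * np.pi * 6 * t)   # slower, lower-frequency component
       + rng.standard_normal(t.size))      # broadband noise

# Welch's method estimates power at each frequency.
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
beta = (freqs >= 13) & (freqs <= 30)
rel_beta = psd[beta].sum() / psd.sum()
print(f"relative beta-band power: {rel_beta:.2f}")
```

Comparing this ratio before and after exercise is, in essence, how a shift toward “more beta waves” would show up in the data.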