“The Descent of Man” is one of the most influential books in the history of human evolutionary science. We can acknowledge Darwin for key insights but must push against his unfounded and harmful assertions. Reflecting on “Descent” today, one can look to data demonstrating unequivocally that race is not a valid description of human biological variation, that there is no biological coherence to “male” and “female” brains or any simplicity in biological patterns related to gender and sex, and that “survival of the fittest” does not accurately represent the dynamics of evolutionary processes.
The scientific community can reject the legacy of bias and harm in the evolutionary sciences by recognizing, and acting on, the need for diverse voices and making inclusive practices central to evolutionary inquiry. In the end, learning from “Descent” illuminates the highest and most interesting problem for human evolutionary studies today: moving toward an evolutionary science of humans instead of “man.”
Tests for selection using polygenic scores failed to find evidence of natural selection when the less biased within-family GWAS effect sizes were used. Tests for selection using Fst values likewise found no evidence of natural selection. The expected mean difference in IQ was substantially smaller than postulated by hereditarians, even under unrealistic assumptions that overestimate genetic contribution.
Given these results, hereditarian claims are not supported in the least. Cognitive performance does not appear to have been under diversifying selection in Europeans and Africans. In the absence of diversifying selection, the best case estimate for genetic contributions to group differences in cognitive performance is substantially smaller than hereditarians claim and is consistent with genetic differences contributing little to the Black–White gap.
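The Fst values mentioned above quantify how much allele frequencies differ between populations. As a minimal sketch of the idea, here is the classic definition Fst = (H_T − H_S) / H_T computed for a single biallelic locus in two equal-size populations; the allele frequencies used are invented for illustration and are not from the study.

```python
# Illustrative two-population Fst at a single biallelic locus,
# using the classic F_ST = (H_T - H_S) / H_T definition.
# The allele frequencies below are made up for the example.

def fst(p1, p2):
    """Fst from allele frequencies p1, p2 in two equal-size populations."""
    p_bar = (p1 + p2) / 2                              # mean allele frequency
    h_t = 2 * p_bar * (1 - p_bar)                      # expected total heterozygosity
    h_s = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2  # mean within-population
    return (h_t - h_s) / h_t

# Small allele-frequency differences yield small Fst values:
print(round(fst(0.50, 0.55), 4))  # → 0.0025
```

The point of the sketch is that Fst near zero indicates weak differentiation at a locus, which is why genome-wide Fst scans are one way to test for diversifying selection between groups.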
As critical race theory becomes increasingly politicized and attacked by Republicans, CNN’s Jason Carroll explains what the concept is, and what it isn’t.
The Southern African concept of ubuntu offers a crucial lesson for the U.S.: By recognizing our interconnections and actively undoing systemic racism, we can all become more fully human.
Even before Hardin’s ‘The Tragedy of the Commons’ was published, however, the young political scientist Elinor Ostrom had proven him wrong. While Hardin speculated that the tragedy of the commons could be avoided only through total privatisation or total government control, Ostrom had witnessed groundwater users near her native Los Angeles hammer out a system for sharing their coveted resource. Over the next several decades, as a professor at Indiana University Bloomington, she studied collaborative management systems developed by cattle herders in Switzerland, forest dwellers in Japan, and irrigators in the Philippines. These communities had found ways of both preserving a shared resource – pasture, trees, water – and providing their members with a living. Some had been deftly avoiding the tragedy of the commons for centuries; Ostrom was simply one of the first scientists to pay close attention to their traditions, and analyse how and why they worked.
The features of successful systems, Ostrom and her colleagues found, include clear boundaries (the ‘community’ doing the managing must be well-defined); reliable monitoring of the shared resource; a reasonable balance of costs and benefits for participants; a predictable process for the fast and fair resolution of conflicts; an escalating series of punishments for cheaters; and good relationships between the community and other layers of authority, from household heads to international institutions.
A common mistake is to speak and think of ‘circular economy’ or ‘regenerative culture’ in the singular. Such thinking is informed by the profoundly un-ecological neoliberal economic doctrine of ‘scaling-up’ and ‘globalising’. To create human economic and industrial patterns that fit into the way life sustains ecosystems and planetary health, we need to co-create diverse circular economies in service of diverse regenerative cultures. The underlying patterns and principles might be the same, yet the place-sourced expressions of these will be unique adaptations to the bio-cultural uniqueness of their bioregional context.
Finally, do human colonies on the wane also become increasingly less capable of differentiation? We know that, when human societies feel threatened, they protect themselves: they zero in on short-term gains, even at the cost of their long-term futures. And they scale up their ‘inclusion criteria’. They value sameness over difference; stasis over change; and they privilege selfish advantage over civic sacrifice.
Viewed this way, the comparison seems compelling. In crisis, the colony introverts, collapsing inwards as inequalities escalate and there’s not enough to go around. In a crisis, as we’ve seen during the COVID-19 pandemic, people define ‘culture’ more aggressively, looking for alliances in the very places where they can invest their threatened social trust; for the centre is threatened and perhaps ‘cannot hold’.
Human cultures, like cell cultures, are not steady states. They can have split purposes as their expanding and contracting concepts of insiders and outsiders shift, depending on levels of trust, and on the relationship between available resources and how many people need them. Trust, in other words, is not only related to moral engagement, or the health of a moral economy. It’s also dependent on the dynamics of sharing, and the relationship of sharing practices to group size – this last being a subject that fascinates anthropologists.
The disease affects more than 260 million people around the world, but we barely understand it. We know that the balance between the prefrontal cortex (at the front of the brain) and the anterior cingulate cortex (tucked just behind it) plays some role in regulating mood, as does the chemical serotonin. But what actually causes depression? Is there a tiny but important area of the brain that researchers should focus on? And does there even exist a singular disorder called depression, or is the label a catch-all denoting a bunch of distinct disorders with similar symptoms but different brain mechanisms? “Fundamentally,” says Hill, “we don’t have a biological understanding of depression or any other mental illness.”
The problem, for Hill, requires an ambitious, participatory approach. If neuroscientists are to someday understand the biological mechanisms behind mental illness—that is, if they are to figure out what literally happens in the brain when a person is depressed, manic, or delusional—they will need to pool their resources. “There’s not going to be a single person who figures it all out,” he says. “There’s never going to be an Einstein who solves a set of equations and shouts, ‘I’ve got it!’ The brain is not that kind of beast.”
Prediction of future sensory input based on past sensory information is essential for organisms to effectively adapt their behavior in dynamic environments. Humans successfully predict future stimuli in various natural settings. Yet, it remains elusive how the brain achieves effective prediction despite enormous variations in sensory input rate, which directly affect how fast sensory information can accumulate. We presented participants with acoustic sequences capturing temporal statistical regularities prevalent in nature and investigated neural mechanisms underlying predictive computation using MEG.
By parametrically manipulating sequence presentation speed, we tested two hypotheses: that neural prediction relies on integrating past sensory information over fixed time periods, or over fixed amounts of information. We demonstrate that across halved and doubled presentation speeds, predictive information in neural activity stems from integration over fixed amounts of information. Our findings reveal the neural mechanisms enabling humans to robustly predict dynamic stimuli in natural environments despite large variations in sensory input rate.
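The contrast between the two hypotheses can be made concrete with a toy calculation: a fixed-time integrator sees more items as the presentation rate doubles, whereas a fixed-information integrator keeps the item count constant and instead shrinks its time window. The rates and window sizes below are illustrative, not values from the study.

```python
# Toy contrast of the two integration hypotheses
# (rates and window sizes are invented for illustration).

def items_in_window(rate_hz, window_s):
    """Items available to a fixed-TIME integrator at a given input rate."""
    return int(rate_hz * window_s)

def window_for_items(rate_hz, n_items):
    """Time a fixed-INFORMATION integrator spans to collect n_items."""
    return n_items / rate_hz

for rate in (2, 4, 8):  # halved / baseline / doubled presentation speeds
    # Fixed-time: item count scales with rate. Fixed-information: time shrinks.
    print(rate, items_in_window(rate, 2.0), window_for_items(rate, 8))
```

Only the fixed-information scheme keeps the amount of accumulated evidence invariant across speeds, which is the signature the MEG analysis reportedly found in neural activity.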
Several biological and social contagion phenomena, such as superspreading events or social reinforcement, are the result of multi-body interactions, for which hypergraphs offer a natural mathematical description. In this paper, we develop a novel mathematical framework based on approximate master equations to study contagions on random hypergraphs with a heterogeneous structure, both in terms of group size (hyperedge cardinality) and of membership of nodes to groups (hyperdegree). The characterization of the inner dynamics of groups provides an accurate description of the contagion process without losing analytical tractability. Using a contagion model where multi-body interactions are mapped onto a nonlinear infection rate, our two main results show how large groups are influential, in the sense that they drive both the early spread of a contagion and its endemic state (i.e., its stationary state).
First, we provide a detailed characterization of the phase transition, which can be continuous or discontinuous with a bistable regime, and derive analytical expressions for the critical and tricritical points. We find that large values of the third moment of the membership distribution suppress the emergence of a discontinuous phase transition. Furthermore, the combination of heterogeneous group sizes and nonlinear contagion facilitates the onset of a mesoscopic localization phase, where contagion is sustained only by the largest groups, thereby inhibiting bistability as well. Second, we formulate a simple problem of optimal seeding for hypergraph contagions to compare two strategies: tuning the allocation of seeds according to either node individual properties or according to group properties. We find that, when the contagion is sufficiently nonlinear, groups are more effective seeds of contagion than individual nodes.
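A minimal Monte Carlo sketch can illustrate the kind of nonlinear group contagion described above: in each group, the per-member infection probability grows as beta·i^nu with the number i of infected group members, so large groups with several infections become disproportionately influential. The hypergraph structure and all parameter values below are invented for illustration; the paper itself works with approximate master equations, not this simulation.

```python
import random

# Toy discrete-time contagion on a random hypergraph with a nonlinear
# group infection probability beta * i**nu (i = infected members in group).
# Structure, beta, nu, and the recovery probability mu are all invented.

random.seed(1)
N, M = 200, 60                       # nodes, hyperedges (groups)
groups = [random.sample(range(N), random.choice([2, 5, 10])) for _ in range(M)]
beta, nu, mu = 0.02, 1.5, 0.2        # infection scale, nonlinearity, recovery

infected = set(random.sample(range(N), 5))   # initial seeds
for _ in range(50):                  # time steps
    new_inf, recovered = set(), set()
    for g in groups:
        i = sum(n in infected for n in g)
        p = min(1.0, beta * i ** nu)         # nonlinear group infection prob.
        for n in g:
            if n not in infected and random.random() < p:
                new_inf.add(n)
    for n in infected:
        if random.random() < mu:
            recovered.add(n)
    infected = (infected | new_inf) - recovered

print(len(infected))                 # prevalence at the end of this toy run
```

Because i^nu with nu > 1 rewards concentration of infections, seeding whole groups rather than scattered individuals accelerates spread in this toy model, mirroring the paper's seeding result.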
We present a review of frequency effects in memory, accompanied by a theory of memory, according to which the storage of new information in long-term memory (LTM) depletes a limited pool of working memory (WM) resources as an inverse function of item strength. We support the theory by showing that items with stronger representations in LTM (e.g., high frequency items) are easier to store, bind to context, and bind to one another; that WM resources are involved in storage and retrieval from LTM; that WM performance is better for stronger, more familiar stimuli.
We present a novel analysis of preceding item strength, in which we show from nine existing studies that memory for an item is higher if during study it was preceded by a stronger item (e.g., a high frequency word). This effect is cumulative (the more prior items are of high frequency, the better), continuous (memory proportional to word frequency of preceding item), interacts with current item strength (larger for weaker items), and interacts with lag (decreases as the lag between the current and prior study item increases). A computational model that implements the theory is presented, which accounts for these effects. We discuss related phenomena that the model/theory can explain.
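The proposed mechanism can be sketched as a toy resource model: encoding an item consumes WM resources inversely with its strength, leftover resources partially recover, and encoding quality scales with the resources available. All functional forms and parameter values here are invented for illustration; they are not the paper's computational model.

```python
# Toy sketch of the WM-depletion account of the preceding-item effect.
# Assumptions (not from the paper): cost of storing an item is 1/strength,
# resources partially recover each step, and encoding quality is
# resources * strength.

def encode_list(strengths, capacity=1.0, recovery=0.5):
    """Return encoding quality for each item in presentation order."""
    resources, quality = capacity, []
    for s in strengths:
        cost = min(resources, 1.0 / s)   # weaker items deplete more WM
        quality.append(resources * s)    # quality scales with free resources
        resources = min(capacity, resources - cost + recovery)
    return quality

# A target item (strength 1.0) encoded after a strong vs. a weak predecessor:
after_strong = encode_list([2.0, 1.0])[1]
after_weak = encode_list([0.5, 1.0])[1]
print(after_strong > after_weak)  # → True: a stronger predecessor helps
```

Even this crude sketch reproduces the qualitative signature reported above: the same item is encoded better when preceded by a high-frequency (strong) item, because the predecessor left more WM resources behind.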
Learning from direct experience is easy—we can always use trial and error—but how do we learn from nondirect (nonlocal) experiences? For this, we need additional mechanisms that bridge time and space. In rodents, hippocampal replay is hypothesized to promote this function. Liu et al. measured high-temporal-resolution brain signals using human magnetoencephalography combined with a new model-based, visually oriented, multipath reinforcement memory task. This task was designed to differentiate local versus nonlocal learning episodes within the subject. They found that reverse sequential replay in the human medial temporal lobe supports nonlocal reinforcement learning and is the underlying mechanism for solving complex credit assignment problems such as value learning.
Graph theory is now becoming a standard tool in systems-level neuroscience. However, endowing observed brain anatomy and dynamics with a complex network representation often involves covert theoretical assumptions and methodological choices that affect the way networks are reconstructed from experimental data, and ultimately the resulting network properties and their interpretation. Here, we review some fundamental conceptual underpinnings and technical issues associated with brain network reconstruction, and discuss how their mutual influence bears on clarifying the organization of brain function.
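One such methodological choice is how to turn pairwise signal statistics into a network at all. A minimal sketch, with synthetic data and arbitrary thresholds (nothing here is from the review): binarizing a correlation matrix at different thresholds yields networks of very different densities, so downstream graph metrics inherit that choice.

```python
import numpy as np

# Synthetic demonstration that the threshold chosen to binarize a
# correlation matrix changes the reconstructed network's density.
# The "recordings" and the injected coupling are invented for illustration.

rng = np.random.default_rng(0)
signals = rng.standard_normal((10, 500))   # 10 "regions" x 500 time samples
signals[1] += 0.8 * signals[0]             # inject one genuine coupling
corr = np.corrcoef(signals)

for thr in (0.1, 0.3, 0.5):
    adj = (np.abs(corr) > thr) & ~np.eye(10, dtype=bool)
    print(thr, adj.sum() // 2)             # edge count depends on threshold
```

The injected region-0/region-1 coupling survives every threshold, but the spurious edges admitted at low thresholds would alter measures like degree distribution or clustering, which is the kind of interpretive pitfall the review examines.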
Cognition can be defined as computation over meaningful representations in the brain to produce adaptive behaviour. There are two views on the relationship between cognition and the brain that are largely implicit in the literature. The Sherringtonian view seeks to explain cognition as the result of operations on signals that are performed at nodes in a network and passed between them, implemented by specific neurons and their connections in circuits in the brain.
The contrasting Hopfieldian view explains cognition as the result of transformations between or movement within representational spaces that are implemented by neural populations. Thus, the Hopfieldian view relegates details regarding the identity of and connections between specific neurons to the status of secondary explainers. Only the Hopfieldian approach has the representational and computational resources needed to develop novel neurofunctional objects that can serve as primary explainers of cognition.
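The population-level, attractor-style computation that gives the Hopfieldian view its name can be shown in a few lines: a classic Hopfield auto-associative network stores a pattern in a Hebbian weight matrix and recovers it from a corrupted cue, with no single neuron or connection serving as the explanatory unit. This is a textbook Hopfield network, offered as background; it is not code from the article.

```python
import numpy as np

# Minimal Hopfield network: one pattern stored via a Hebbian outer-product
# rule, then recalled from a corrupted cue by iterating the population state.

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0)                 # no self-connections

state = pattern.copy()
state[:3] *= -1                        # corrupt 3 of the 8 units
for _ in range(5):                     # synchronous updates toward attractor
    state = np.sign(W @ state).astype(int)

print((state == pattern).all())        # → True: stored pattern recovered
```

The recall dynamics are a movement within a representational space toward a stored attractor, which is exactly the sense in which the Hopfieldian view treats neural populations, rather than individual cells and synapses, as the primary explainers.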