Sharpen your intuitions about plausibility of observed effect sizes.
r > .60? Is that effect plausibly as large as the relationship between gender and height (.67) or nearness to the equator and temperature (.60)?
An excellent thread about established effect sizes for things many of us are familiar with. Here's a set of much smaller effect sizes:
r > .10? Is that effect plausibly as large as the relationship between antihistamine and runny nose (.11), childhood lead exposure and IQ (.12), anti-inflammatories and pain reduction (.14), self-disclosure and likability (.14), or nicotine patch and smoking abstinence (.18)?
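One way to ground these intuitions is to note how little variance a small correlation explains: r² is the share of variance in one variable accounted for by the other. A minimal sketch using the r values quoted above (pure Python; the labels are abbreviations of the pairings listed, nothing else is assumed):

```python
# Shared variance (r^2) for the small effect sizes quoted above.
effects = {
    "antihistamine vs. runny nose": 0.11,
    "childhood lead exposure vs. IQ": 0.12,
    "nicotine patch vs. abstinence": 0.18,
}
for label, r in effects.items():
    print(f"{label}: r = {r:.2f}, variance explained = {r**2:.1%}")
```

Even the largest of these, r = .18, corresponds to about 3% of variance explained, which is why effects in this range are so easy to mistake for noise.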
The original article with the data is Psychological testing and psychological assessment: A review of evidence and issues
Leveraging the Adolescent Brain Cognitive Development (ABCD) Study (N = 11,878), we estimated the effect sizes and reproducibility of these brain-wide association studies (BWAS) as a function of sample size. The very largest, replicable brain-wide associations for univariate and multivariate methods were r = 0.14 and r = 0.34, respectively.
In smaller samples, typical for BWAS, irreproducible, inflated effect sizes were ubiquitous, no matter the method (univariate or multivariate). Until sample sizes started to approach consortium levels, BWAS were underpowered and statistical errors were assured. Multiple factors contribute to replication failures; here, we show that the pairing of small brain-behavioral phenotype effect sizes with sampling variability is a key element in widespread BWAS replication failure.
Brain-behavioral phenotype associations stabilize and become more reproducible with sample sizes of N ≳ 2,000. While investigator-initiated brain-behavior research continues to generate hypotheses and propel innovation, large consortia are needed to usher in a new era of reproducible human brain-wide association studies.
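The inflation the authors describe is easy to reproduce in simulation: when the true effect is small, the sample correlation at small N is noisy, and reporting the largest association found across many tested brain-behavior pairs guarantees an inflated estimate. A sketch under assumed numbers (true r = 0.10, 100 tested associations; pure Python, not the authors' code):

```python
import random
import statistics

def sample_r(true_r, n, rng):
    # Draw n pairs from a bivariate normal with correlation true_r
    # and return the sample Pearson correlation.
    xs, ys = [], []
    for _ in range(n):
        x = rng.gauss(0, 1)
        y = true_r * x + (1 - true_r**2) ** 0.5 * rng.gauss(0, 1)
        xs.append(x)
        ys.append(y)
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / (n - 1)
    return cov / (sx * sy)

rng = random.Random(0)
true_r = 0.10  # a realistic brain-wide association, per the study
for n in (25, 200, 2000):
    # "Winner's curse": report the largest |r| found across 100 tested associations.
    best = max((sample_r(true_r, n, rng) for _ in range(100)), key=abs)
    print(f"N={n:>4}: largest observed r across 100 tests = {best:+.2f}")
```

At N = 25 the sampling standard deviation of r is roughly 0.2, so the "best" of 100 tests can land several times above the true effect; only as N approaches the thousands do the estimates collapse toward r = 0.10.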
Lennart Nacke, director of the HCI Games Group at the University of Waterloo, says speedrunning could have a similar effect on players like Fowler. “At the heart of every video game is a learning super engine,” says Nacke. “He’s optimizing learning. He’s becoming an athlete in that way.”
The speedrunner’s lexicon sounds like another language to the uninitiated. They “clip” through walls by hitting exactly the right screen pixel at the perfect speed. They pray that “RNG” (shorthand for random number generators that inject unpredictability into enemy movements) doesn’t interrupt their chosen path: unlucky RNG ruins flawless runs. The goal is to minimize obstacles. “For game designers, it’s all about adding friction and challenges,” says Nacke. “Speedrunners eliminate all the friction.”
“Most speedrunners are perfectly synchronized,” Nacke adds. “The cognitive system is automated. It’s become so ingrained in his motor cortex that now he’s doing that motor function to achieve the optimal time.” In other words, Fowler’s hands move too quickly for him to explain his movements in real time. But he’s typically in total control.
Twenty-nine teams involving 61 analysts used the same data set to address the same research question: whether soccer referees are more likely to give red cards to dark-skin-toned players than to light-skin-toned players. Analytic approaches varied widely across the teams, and the estimated effect sizes ranged from 0.89 to 2.93 (Mdn = 1.31) in odds-ratio units. Twenty teams (69%) found a statistically significant positive effect, and nine teams (31%) did not observe a significant relationship.
Overall, the 29 different analyses used 21 unique combinations of covariates. Neither analysts’ prior beliefs about the effect of interest nor their level of expertise readily explained the variation in the outcomes of the analyses. Peer ratings of the quality of the analyses also did not account for the variability. These findings suggest that significant variation in the results of analyses of complex data may be difficult to avoid, even by experts with honest intentions. Crowdsourcing data analysis, a strategy in which numerous research teams are recruited to simultaneously investigate the same research question, makes transparent how defensible, yet subjective, analytic choices influence research results.
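A small worked example shows how the choice of covariates alone can move an odds ratio, which is one mechanism behind the spread above. The counts here are invented purely for illustration (they are not the referee data), and the "league" covariate is hypothetical; the adjusted estimate uses the standard Mantel-Haenszel pooling:

```python
# Invented counts for illustration only (not the actual referee data).
# Each stratum: (dark_red, dark_none, light_red, light_none),
# split by a hypothetical covariate (league).
strata = {
    "league A": (10, 190, 8, 392),
    "league B": (30, 170, 10, 190),
}

def odds_ratio(a, b, c, d):
    # OR for (exposed events a, exposed non-events b) vs. (c, d).
    return (a / b) / (c / d)

# Crude (unadjusted) odds ratio: pool both leagues and ignore the covariate.
pooled = [sum(s[i] for s in strata.values()) for i in range(4)]
crude = odds_ratio(*pooled)

# Mantel-Haenszel odds ratio, adjusting for league.
num = sum(a * d / (a + b + c + d) for a, b, c, d in strata.values())
den = sum(b * c / (a + b + c + d) for a, b, c, d in strata.values())
adjusted = num / den

print(f"crude OR = {crude:.2f}, covariate-adjusted OR = {adjusted:.2f}")
```

With these made-up counts the crude and adjusted estimates differ by about half an odds-ratio unit from a single covariate decision; 21 unique covariate combinations across 29 teams leaves ample room for the observed 0.89 to 2.93 range.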
In 2018, Silberzahn, Uhlmann, Nosek, and colleagues published an article in which 29 teams analyzed the same research question with the same data: Are soccer referees more likely to give red cards to players with dark skin tone than light skin tone? The results obtained by the teams differed extensively. Many concluded from this widely noted exercise that the social sciences are not rigorous enough to provide definitive answers. In this article, we investigate why results diverged so much.
We argue that the main reason was an unclear research question: Teams differed in their interpretation of the research question and therefore used diverse research designs and model specifications. We show by reanalyzing the data that with a clear research question, a precise definition of the parameter of interest, and theory-guided causal reasoning, results vary only within a narrow range. The broad conclusion of our reanalysis is that social science research needs to be more precise in its “estimands” to become credible.
Why this fear of panic? What would have been wrong with allowing the public to feel afraid? Contrary to Lightfoot’s reassurances, there was a reason for the citizens of Chicago—and the rest of us—to “be fearful.” Yet leaders on both sides of the Pacific, at both the local and national levels, among both the politicians and the opinion-makers, were determined to keep their people as far away from fear as possible.
Events proved the anxieties of these elites unfounded: when cities in China, Europe, and finally the United States descended into lockdown, there was no mass panic. There was fear, yes, plenty of it—but that fear did not lead to irrational, hysterical, or violent group behavior. Our fear did not lead to looting, pogroms, or unrest. The fearful of Wuhan did not rise up in rebellion against the Communist Party; even when Italian doctors began rationing medical equipment and supplies, the fearful of Milan did not loot stores or disrupt the medical system; the fearful of New York did not duel each other to the death over toilet paper rolls.
The social “panic” that disturbed mayors, presidents, columnists, and Communists never materialized. It never does. Time and resources that could have been devoted to combatting a very real pandemic were wasted combatting an imaginary social phenomenon. In 2020, we all learned the perils of the myth of panic.
The power of the nervous system lies in this ability to learn, even through adulthood. Networks of neurons discover new relationships through the timing of electrochemical impulses called spikes, which neurons use to communicate with one another. This temporal pattern strengthens or weakens connections between cells, constituting the physical substrate of a memory. Most of the time, the upshot is beneficial. The ability to associate causes with effects—encroaching shadows with dive-bombing falcons, cacti with hidden water sources—gives organisms a leg up on predators and competitors.
But sometimes neurons are too good at their jobs. The brain, with its extraordinary computational prowess, can learn language and logic. It can also learn how to be sick.
People who experience a single random seizure, for instance, are 50 times more likely to become epileptic than someone who has never had one. Like Philip’s raven, the same stimuli that preceded the first fit—such as anxiety or a particular musical passage—more readily trigger future episodes. And the more often seizures occur, the stronger and more pervasive the underlying neural network may become, potentially inducing more widespread or more violent attacks.
One last thing that stunned me from your book: You write about the metabolic cost of pregnancy — comparing pregnant women to Tour de France riders.
You can push the body as in the Tour de France, where riders burn 7,000 or 8,000 calories a day for three weeks. But it also makes sense that pregnancy is pushing the same metabolic limits as something like the Tour de France. They both run your body’s metabolic machinery at full blast for as long as it can keep it up. It just speaks to how taxing pregnancy is, for one thing, but it also speaks to how these things are all connected. Our energetic machinery gets co-opted into these different tasks and makes connections that unite all of these different experiences.
Scientists have also looked at the existence of inherited trauma in groups such as the children of Holocaust survivors, Native American communities and the sons of Civil War prisoners of war, to name a few. And though the findings seem to support the idea that trauma did, in fact, lead to changes in future generations, critics have noted small sample sizes, exaggeration of causality and media sensationalism as reasons to doubt them.
Marlin, who conducts her research on mice, supports making sure the “science is rigorous” and acknowledges issues with data from others in the past. However, she said that “if I take a step back from being a scientist and am just a fellow human in society, we see inherited trauma playing out in many instances across the world; it makes sense. Now we need to identify the biology behind this inheritance, which will help us better understand and navigate the stresses of our world today.”
“Marriage rituals!” Marcus exploded, hoarse from exhaustion. “What the hell is the point of that?” His question masked a bigger one: Why would anyone go to a mountainous country that seemed weird to Westerners and immerse herself in an alien culture to study it? I understood his reaction. As I later admitted in my doctoral thesis: “With people dying outside on the streets of Dushanbe, studying marriage rituals did sound exotic—if not irrelevant.”
Anthro-Vision: A New Way to See in Business and Life has a simple aim: to answer Marcus’ question—and show that the ideas emanating from a discipline that many people think (wrongly) studies only the “exotic” are vital for the modern world. The reason is that anthropology is an intellectual framework that enables you to see around corners, spot what is hidden in plain sight, and gain empathy for others and fresh insight on problems. This framework is needed more than ever now as we grapple with climate change, pandemics, racism, social media run amok, artificial intelligence, financial turmoil, and political conflict.
Imagine waking up in the middle of the night to your phone buzzing. Your fingertips, feeling around in the dark, somehow recognize the device on your nightstand, distinguishing it from other objects by touch alone.
To explore how we accomplish sensory feats like this, neuroscientists at Columbia’s Zuckerman Institute used a novel quantitative approach to study how mice use their whiskers to feel the shapes of things. The researchers discovered that the brain reconfigures itself dramatically when identifying objects by touch.
This surprising mental agility, described July 21 in Neuron, challenges the traditional view that brain cells have fairly fixed roles in controlling the body.
But you’ve also written that analogy is “an understudied area in AI.” If it’s so fundamental, why is that the case?
One reason people haven’t studied it as much is because they haven’t recognized its essential importance to cognition. Focusing on logic and programming in the rules for behavior — that’s the way early AI worked. More recently people have focused on learning from lots and lots of examples, and then assuming that you’ll be able to do induction to things you haven’t seen before using just the statistics of what you’ve already learned. They hoped the abilities to generalize and abstract would kind of come out of the statistics, but it hasn’t worked as well as people had hoped.
You can show a deep neural network millions of pictures of bridges, for example, and it can probably recognize a new picture of a bridge over a river or something. But it can never abstract the notion of “bridge” to, say, our concept of bridging the gender gap. These networks, it turns out, don’t learn how to abstract. There’s something missing. And people are only sort of grappling now with that.
States serious about reducing overdose deaths should devote most of their funds to harm reduction and evidence-based treatment. Harm reduction strategies – those predicated on meeting people where they are and encouraging positive change – have proved effective in reducing overdose deaths. These approaches include syringe provision programs, naloxone distribution programs and supervised consumption services.