Consider the Lego structure depicted in Figure 1, in which a figurine is placed under a roof supported by a single pillar at one corner. How would you change this structure so that you could put a masonry brick on top of it without crushing the figurine, bearing in mind that each block added costs 10 cents? If you are like most participants in a study reported by Adams et al.1 in Nature, you would add pillars to better support the roof. But a simpler (and cheaper) solution would be to remove the existing pillar, and let the roof simply rest on the base. Across a series of similar experiments, the authors observe that people consistently consider changes that add components over those that subtract them — a tendency that has broad implications for everyday decision-making…
Adams et al. demonstrated that the reason their participants offered so few subtractive solutions is not because they didn’t recognize the value of those solutions, but because they failed to consider them. Indeed, when instructions explicitly mentioned the possibility of subtractive solutions, or when participants had more opportunity to think or practise, the likelihood of offering subtractive solutions increased. It thus seems that people are prone to apply a ‘what can we add here?’ heuristic (a default strategy to simplify and speed up decision-making). This heuristic can be overcome by exerting extra cognitive effort to consider other, less-intuitive solutions…
These perceived disadvantages of subtractive solutions might encourage people to routinely seek out additive ones. This is consistent with Adams and colleagues’ suggestion that frequent previous exposure to additive solutions has made them more cognitively accessible, and thus more likely to be considered. However, in addition, we posit that previous experience could lead people to assume that they are actually expected to add rather than subtract. As a result, the study’s participants might be generalizing from past experiences and instinctively assume that they should add features, only revisiting this assumption after further reflection or explicit prompting. Similarly, members of a university community might implicitly assume that the incoming president wants them to formulate new initiatives, not criticize existing ones.
After his observations of brain imprints preserved in fossil cranial specimens from Olduvai (Tanzania) (2), paleoanthropologist Phillip V. Tobias stated that “hominid evolution attained a new level of organization…with the emergence of the genus Homo.” There have since been debates on whether humanlike brain organization emerged concomitantly with the appearance of the genus Homo. On page 165 of this issue, Ponce de León et al. (3) challenge this view by suggesting that Homo in Dmanisi (foothills of the Georgian Caucasus) 1.85 to 1.77 million years (Ma) ago showed a primitive organization of the brain.
His 1972 book, Stone Age Economics, established him as one of American anthropology’s most significant theorists; he argued that hunter-gatherers were not some primitive, undeveloped representation of human potential but were in fact the original affluent society. Sahlins challenged anthropologists who used Western economic models to study nonmarket economies, eventually insisting that materialism was nothing but a form of idealism. In the years that followed he wrote books cutting through the codes of history, culture, kinship, and mythos, frequently revealing culture at the core of what was otherwise conceived in some other way; all this presented with frequent surges of brilliance.
Marshall Sahlins’s greatest political contribution grew from his anti-war activism during the Vietnam War. Concerned about the lies being propagated by the US government, in 1965 he traveled on his own to Vietnam and used his ethnographic sensibilities to see what he could learn firsthand. His trip culminated in his seminal political essay “The Destruction of Conscience in Vietnam.” His political opposition to the war and academic critiques led him to establish the first anti-war teach-in, held on the University of Michigan campus in March 1965. In the months that followed, hundreds of similar teach-ins sprang up on college campuses across the US, and the format rapidly became an important tool for mobilizing American campuses against the war.
Recently, a possible unifying account of cognition (and perhaps emotion) has begun to emerge within computational neuroscience. According to the so-called predictive processing framework, the brain is constantly attempting to minimize the discrepancy between its sensory expectations and its actual incoming sensory signals. This framework offers an architecture in which distinct functions can be explained at their different time-scales by the same computational principles, and in which distinct theories can find a common language, which brings fruitful modelling advantages. As such it is quickly becoming an attractive way of carrying out theoretical and experimental research in cognitive science.
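The core computational principle here — continually updating internal expectations to reduce the mismatch with incoming signals — can be illustrated with a minimal sketch. This is purely an illustrative toy, not a model from the predictive-processing literature: the scalar setup, the function name `update_expectation`, and the fixed learning rate are all assumptions made for the example.

```python
# Toy sketch of prediction-error minimization: an internal expectation
# is repeatedly nudged toward an incoming sensory signal, shrinking the
# discrepancy (prediction error) over time.

def update_expectation(expectation, observation, learning_rate=0.1):
    """Move the expectation a small step toward the observed signal."""
    prediction_error = observation - expectation
    return expectation + learning_rate * prediction_error

expectation = 0.0   # the system's initial guess
signal = 10.0       # a constant incoming sensory signal

for _ in range(100):
    expectation = update_expectation(expectation, signal)

# After repeated updates the expectation converges on the signal,
# driving the prediction error toward zero.
```

The point of the sketch is only that a single update rule, applied over and over, suffices to make the system's expectations track its inputs — the same principle the framework proposes operates at many time-scales at once.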
More recently, this framework has begun to serve as the architectural basis for an exciting and very promising new account of feelings, emotions and moods. This workshop will bring together philosophers and cognitive scientists working on predictive processing and emotion in order to promote an interdisciplinary dialogue about the nature of emotion in predictive systems like us.
You can access all the videos from the Emotion and Prediction workshop here.
Watch, for example, Emotion as Interoceptive Active Inference:
Almost 400 years ago, with the dictum ‘I think, therefore I am,’ René Descartes claimed that cognition was the foundation of the human condition. Today, prediction has taken its place. As the cognitive scientist Anil Seth put it: ‘I predict (myself) therefore I am.’
Somehow, the logic we find animating our bodies is the same one transforming our body politic. The prediction engine – the conceptual tool used by today’s leading brain scientists to understand the deepest essence of our humanity – is also the one wielded by today’s most powerful corporations and governments. How did this happen and what does it mean? …
The strength of this association between predictive economics and brain sciences matters, because – if we aren’t careful – it can encourage us to reduce our fellow humans to mere pieces of machinery. Our brains were never computer processors, as useful as it might have been to imagine them that way every now and then. Nor are they literally prediction engines now and, should it come to pass, they will not be quantum computers. Our bodies aren’t empires that shuttle around sentrymen, nor are they corporations that need to make good on their investments. We aren’t fundamentally consumers to be tricked, enemies to be tracked, or subjects to be predicted and controlled. Whether the arena be scientific research or corporate intelligence, it becomes all too easy for us to slip into adversarial and exploitative framings of the human; as Galison wrote, ‘the associations of cybernetics (and the cyborg) with weapons, oppositional tactics, and the black-box conception of human nature do not so simply melt away.’
How we see ourselves matters. As the feminist scholar Donna Haraway explained, science and technology are ‘human achievements in interaction with the world. But the construction of a natural economy according to capitalist relations, and its appropriation for purposes of reproducing domination, is deep.’ Human beings aren’t pieces of technology, no matter how sophisticated. But by talking about ourselves as such, we acquiesce to the corporations and governments that decide to treat us this way.
It must surely be more stimulating to the reader’s senses if, instead of writing “He made a hurried meal off the Plat du Jour—excellent cottage pie and vegetables, followed by home-made trifle” (I think this is a fair English menu without burlesque) you write “Being instinctively mistrustful of all Plats du Jour, he ordered four fried eggs cooked on both sides, hot buttered toast and a large cup of black coffee.” No difference in price here, but the following points should be noted: firstly, we all prefer breakfast foods to the sort of food one usually gets at luncheon and dinner; secondly, this is an independent character who knows what he wants and gets it; thirdly, four fried eggs has the sound of a real man’s meal and, in our imagination, a large cup of black coffee sits well on our taste buds after the rich, buttery sound of the fried eggs and the hot buttered toast.
Watching those wiggly lines march across the EEG screen gave me the germ of a different idea, something that didn’t boil down to pure neuronal computation or information-processing. Every time a neuron fires, along with the matter-based signal that travels down its wire-like nerve fibre, it also projects a tiny electromagnetic (EM) pulse into the surrounding space, rather like the signal from your phone when you send a text. So when my son heard the door close, as well as triggering the firing of billions of nerves, its slamming would have projected billions of tiny pulses of electromagnetic energy into his brain. These pulses flow into each other to generate a kind of pool of EM energy that’s called an electromagnetic field – something that neurobiologists have neglected when probing the nature of consciousness…
The unity of EM fields is apparent whenever you use wifi. Perhaps you’re streaming a radio documentary about Katumuwa’s stele on your phone while another family member is watching a movie, and another is listening to streamed music. Remarkably, all this information, whether movies, pictures, messages or music, is instantly available to be downloaded from any point in the vicinity of your router. This is because – unlike the information encoded in discrete units of matter such as computer gates or neurons – EM field information is encoded as immaterial waves that travel at the speed of light from their source to their receiver. Between source and receiver, all those waves encoding different messages overlap and intermingle to become a single EM field of physically bound information with as much unity as a single photon or electron, and which can be downloaded from any point in the field. The field, and everything encoded in it, is everywhere. While watching my son’s EEG marching across the screen, I wondered what it was like to be his brain’s EM field pulsing with physically bound information correlating with all of his sense perceptions. I guessed it would feel a lot like him.
Q seems to have disappeared for now. Whoever they are, they’ve largely abandoned their base after riling them up for years — and have not posted new “Q drops” since 2020. But the people who believe in Q are wrestling with whether to keep on believing or to abandon a cause that, for some, became core to their identities. Some might be deprogramming themselves, while others are cherry-picking the parts of the movement they want to hold on to. But the people I spoke to say their feelings have changed drastically from when they were following the inauguration to when Biden’s stimulus checks were being sent out. Vanderbilt has been using the weeks after QAnon’s disintegration to read more, learn more, talk to more people, and question absolutely everything she’s ever known. “It’s kind of a little bit of a do-over,” she said after the inauguration. “I’m going to learn the world again.”
But there was something that the mainstream media, in its hubris, failed to notice about David Icke: a growing number of people were feeling more aligned to him than to his tormentors. These were people who also, for their own reasons, felt ridiculed and shut out of the culture. And so when Icke re-emerged with his paedophile lizard theory he immediately began selling out concert halls across the world. It was an incredibly surprising and, I suspect, spiteful story born from injury: conspiracy theory as grievance storytelling. And it was a dangerous theory, with its appeals to paranoia and delusion.
When sceptics are asked to explain why people succumb to conspiracy theories, they tend to say they offer a strange comfort – they allow people to make sense of a chaotic world. But I think there’s another, more often ignored reason. You get renaissances of conspiracy theories when the powerful behave in conspiratorial ways. The mystery is why the theorists are never happy with the actual evidence, and instead behave like amateur sleuths inside some magical parallel world where metaphors are facts. In that world, the deaths at David Koresh’s church in Waco were caused not by government overreach but by the Illuminati’s Satanic desire for blood sacrifice. Why they invariably slap a layer of fiction on top of an already fascinating truth had long been a puzzle to me, and to many others, too: a question I’ve been asked over and over is whether I think Alex Jones knows he’s lying when he tells his millions of listeners that, for instance, the Sandy Hook school shootings were “a giant hoax”…
At first, I felt sad for him, wondering if he was embarrassed that a thing like that had come out in court. But I kept thinking about it and, honestly, it answers a lot of questions. High-scoring narcissists are prone to paranoia and black-and-white thinking. Through their eyes everyone is either wonderful or else they’re the enemy. (Often the wonderful person commits some minor transgression and instantly becomes the enemy; if you’ve been close to a narcissist you’ll probably recognise that “love-bomb, devalue, discard” relationship arc.) And narcissists need to feel like they’re the smartest person in the room – hence, I suspect, their reaching for conspiracy theories with their obnoxiously counterintuitive, superficially complex worldviews.
With David Icke and Alex Jones the movement had found its stars. So now all it needed was a better distribution system. Unfortunately the one it got turned out to massively exacerbate our proclivity for paranoia and black-and-white thinking – social media algorithms.
Check out this online poster – it comes across as a combination of PowerPoint, Prezi, and conference poster, optimized for presentation at an online event. And a fun topic too!
How We Record Audio At The Tiny Desk
Accessible overview of how the recordings at Tiny Desk turn out amazing, with a focus on how the music gets captured through the effective use of mics.
“I think walking is probably the single most underutilized tool in health and wellness,” says nutrition coach and personal trainer Jeremy Fernandes. According to Fernandes, the reason we rarely hear about walking as a major fitness tool—in the same conversations as stuff like yoga or expensive spinning bikes—is that people aren’t emotionally prepared for fitness to be easy. “Most people want to believe that working out and fat loss needs to be hard. If you need impossibly crushing workouts to get in better shape, then you’re not responsible when you fail,” he says. “But a basic program performed consistently—even a half-assed effort done consistently—can bring you a really long way, much further than going hardcore once in a while.”