The study of cancer patients “led us to consider whether or not this treatment might be effective for people in the general depression community,” Davis says.
In the new study, patients received two doses of psilocybin on different days and also received about 11 hours of psychotherapy. The drug was administered in a supervised yet homey setting designed to put participants at ease, Davis says.
“They have a blindfold on, they have headphones on, listening to music,” he says. “And we really encourage them to go inward and to kind of experience whatever is going to come up with the psilocybin.”
Half the participants began treatment immediately. The rest were put on a waitlist so they could serve as a comparison group until their own treatment began eight weeks later.
“There was a significant reduction in depression in the immediate-treatment group compared to those in the waitlist,” Davis says. And patients responded much faster than with typical antidepressants.
Japan has come to be seen as a much-admired and widely emulated exemplar of these active, “understanding-centered” teaching methods. But what’s often missing from the discussion is the rest of the story: Japan is also home to the Kumon method of teaching mathematics, which emphasizes memorization, repetition, and rote learning hand-in-hand with developing the child’s mastery over the material. This intense afterschool program, and others like it, is embraced by millions of parents in Japan and around the world who supplement their child’s participatory education with plenty of practice, repetition, and yes, intelligently designed rote learning, to allow them to gain hard-won fluency with the material…
The problem with focusing relentlessly on understanding is that math and science students can often grasp essentials of an important idea, but this understanding can quickly slip away without consolidation through practice and repetition. Worse, students often believe they understand something when, in fact, they don’t…
This approach—which focused on fluency instead of simple understanding—put me at the top of the class. And I didn’t realize it then, but this approach to learning language had given me an intuitive understanding of a fundamental core of learning and the development of expertise—chunking.
Simply tuning Republicans into MSNBC, or Democrats into Fox News, might only amplify conflict. What can we do to make people open their minds?
The trick, as strange as it may sound, is to make people believe the opposite opinion was their own to begin with.
The experiment relies on a phenomenon known as choice blindness, discovered in 2005 by a team of Swedish researchers. The researchers presented participants with two photos of faces, asked them to choose the one they found more attractive, and then handed them that photo. Using a clever trick inspired by stage magic, the photo was switched before being handed over, so participants actually received the face they had not chosen—the less attractive one. Remarkably, most participants accepted the switched photo as their own choice and then proceeded to give arguments for why they had chosen that face in the first place. This revealed a striking mismatch between our choices and our ability to rationalize outcomes. The finding has since been replicated in various domains, including taste for jam, financial decisions, and eyewitness testimony.
You think deep learning will be enough to replicate all of human intelligence. What makes you so sure?
I do believe deep learning is going to be able to do everything, but I do think there’s going to have to be quite a few conceptual breakthroughs. For example, in 2017 Ashish Vaswani et al. introduced transformers, which derive really good vectors representing word meanings. It was a conceptual breakthrough. It’s now used in almost all the very best natural-language processing. We’re going to need a bunch more breakthroughs like that.
And if we have those breakthroughs, will we be able to approximate all human intelligence through deep learning?
Yes. Particularly breakthroughs to do with how you get big vectors of neural activity to implement things like reason. But we also need a massive increase in scale. The human brain has about 100 trillion parameters, or synapses. What we now call a really big model, like GPT-3, has 175 billion. It’s a thousand times smaller than the brain. GPT-3 can now generate pretty plausible-looking text, and it’s still tiny compared to the brain.
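The scale gap Hinton describes is simple arithmetic on the two figures he cites. A quick sketch (using the interview’s approximate numbers; the brain count is an order-of-magnitude estimate, not a precise measurement):

```python
# Scale comparison using the figures stated in the interview.
brain_synapses = 100e12   # ~100 trillion synapses (parameters) in the human brain
gpt3_params = 175e9       # GPT-3's 175 billion parameters

ratio = brain_synapses / gpt3_params
print(f"The brain has roughly {ratio:.0f}x more parameters than GPT-3")
```

The exact ratio works out to a few hundred, which is why Hinton rounds it to “a thousand times smaller”: at this level the point is the order of magnitude, not the precise multiplier.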
Findings showed that Helicobacter, a genus responsible for many intestinal diseases including ulcers, and Gallibacterium, a genus with many hemolytic species found in birds including poultry, were generally more abundant in birds that performed poorly.
“While we did not identify beneficial taxa responsible for differences among performance categories, we suggest Helicobacter and Gallibacterium may signal microbiome imbalance or maladaptation in poor-performance birds,” said Rindy C. Anderson, Ph.D., senior author, an assistant professor of biological sciences in FAU’s Charles E. Schmidt College of Science, and a member of FAU’s Brain Institute. “This finding raises the question: ‘Do specific taxa influence cognitive performance? Or, is a songbird’s gut microbiome simply indicative of host quality and thus correlated with cognitive ability?’ Research could address these questions by describing the functionality of the core microbiome members for more bird species and testing how specific pre- and probiotic treatments affect cognitive ability.”