There’s plenty of hype around education products and services, many of which claim to have scientific research backing them up. Perhaps the best example is the “brain training” industry, which presents its claims as scientific fact. Not so fast, said a group of psychology and neuroscience professors this week, as they issued a statement clarifying that there really is little evidence to substantiate the claims about “neuroplasticity” made by the makers of “brain-based games.” This raises lots of questions about how school leaders can wade through the marketing spin and assess what’s science and what’s PR.
One of the most read articles we’ve published here on Educating Modern Learners was one of our first: “Should You Build a Brain-Based School?” by Randolph-Macon College psychology professor Cedar Riener. Putting a damper on some of the wild claims about “brain-based schooling,” Riener’s main argument: “Can neuroscience improve educational practice? The answer to this is a qualified yes, but far less than most people think.”
You’d think, based on all the headlines and advertising promising “brain training,” that the answer to Riener’s question would in fact be a resounding yes. “Brain training” is big business. Any number of products, services, and guides promise to improve your memory and retention and boost your “neuroplasticity.” Lumos Labs, the company behind Lumosity, has raised over $67 million in venture capital, for example, and according to one industry analyst, the market for brain training is expected to reach $6 billion by 2020. And increasingly, these products, along with their theories about how the brain works and their claims about “scientific research,” are creeping into the classroom.
That’s why a statement released this week by the Stanford Center for Longevity and the Max Planck Institute for Human Development, signed by over 70 psychology and neuroscience professors, is so important. (“A Consensus on the Brain Training Industry from the Scientific Community” is available here.) In it, they write, “To date, there is little evidence that playing brain games improves underlying broad cognitive abilities, or that it enables one to better navigate a complex realm of everyday life.”
In summary: We object to the claim that brain games offer consumers a scientifically grounded avenue to reduce or reverse cognitive decline when there is no compelling scientific evidence to date that they do. The promise of a magic bullet detracts from the best evidence to date, which is that cognitive health in old age reflects the long-term effects of healthy, engaged lifestyles. In the judgment of the signatories, exaggerated and misleading claims exploit the anxiety of older adults about impending cognitive decline. We encourage continued careful research and validation in this field.
According to The Chronicle of Higher Education, the professors’ concern isn’t simply that research claims are distorted in order to sell “brain-based” products; it’s that some of their peers have financial relationships with these companies — financial stakes, for example, or paid research gigs. “‘There’s a conflict of interest there,’ said Randall W. Engle, a psychology professor at the Georgia Institute of Technology who has conducted several studies debunking brain-training claims. ‘It gives me the concern, when I read their papers: Is this the consultant for Lumosity talking, or is this the objective scientist talking?’”
Evaluating Education Research
Of course, “the promise of a magic bullet” isn’t made only by the brain-training industry. Nor is this the first or only example of cherry-picking scientific research in order to sell a product or service or promote a particular political agenda. In fact, all this is (sadly) pretty par for the course in education (although in fairness, education is hardly alone here).
As education technology adoption grows, we’re likely to see more of this, as companies lean on the association between “technology” and “science.”
So, what are some of the things we should look for when we hear scientific research touted?
First, look to see if the research is peer reviewed. White papers are typically self-published for marketing purposes. That’s quite different from findings published in scholarly journals, which often (though not always) go through rigorous peer review by other experts in the field. But even then, it’s worth examining that research closely. Who are the researchers involved? Are they academics? Are they industry researchers? Is there any financial conflict of interest?
Second, take a look at the research design. “Controlled experiments” are often tough to do in education, particularly in “real world” settings like the classroom. (Students are rarely “randomly assigned” to teachers, for example.) Look at the sample size. Are the findings “statistically significant”? Are they generalizable to the population at large? Examine what’s actually being measured. Do the findings match what others have found, or the larger body of research on the topic? Is the research replicable?
Third, read the journal article — not just the summary, and not just journalists’ interpretation of the results. Remember the research that gets published is often that which has some positive effect. “We didn’t really find anything” — a null result — doesn’t make for much of a journal article. And it certainly doesn’t make good headlines or marketing copy.
Finally, be skeptical when you hear the word “proof.” When, for example, brain training companies tout “proven results,” it’s likely that’s a phrase written by the marketing department, not by a scientist.
Image credits: dierk schaefer