3 Approaches to Fix the Neuroscience Reproducibility Crisis:
- Run trial studies with the intention to re-run the experiment.
- Use larger sample sizes.
- Diversify your sample by using online volunteer communities.
There is a well-documented reproducibility crisis in modern scientific research, especially in cognitive neuroscience. Because of the cost and logistics of recruiting subjects and using high-tech equipment such as fMRI scanners, cognitive neuroscience labs rarely reproduce the results of their experimental trials.
Reproducibility, in this post, is defined as an internal laboratory process in which experimental design moves from pilot or exploratory trials to larger, more controlled experimental trials that are run multiple times. In properly designed experiments, the data and results from each trial should:
- Merge into a larger data set.
- Yield a larger sample size.
- Provide high statistical power.
These factors make neuroscience experiments worthy of published research reports, so that other labs may attempt to replicate the findings. Unfortunately, for several reasons, this rarely happens.
This reproducibility crisis threatens the future of science because it wastes time, effort, resources, and expertise on studies that will not be replicated. Without replication, “landmark” studies fade into legend and seldom affect the world in the ways promised. An analytical report by the Hematology and Oncology Research vice-president estimated that the lack of reproducibility costs up to 28 billion dollars each year in the United States alone, funds ultimately wasted on preclinical work that fails to meet the standard of reliable reproducibility.
To change this pattern, we must first understand it. We can start by examining neuroscience’s current practices and culture. This post will discuss experimental design and trial scheduling, and offer several ways your laboratory can improve experimental reproducibility, both within the lab (experimental trials) and outside of it (open science data sharing). By the end, you will know which resources can reduce the cost and logistics of obtaining sample data and test subjects.
Run it Again!
It is possible to achieve confidence in your study’s reproducibility.
Most experimental designs start with a pilot or exploratory study whose sample size depends on the nature of the experiment; in cognitive neuroscience, that initial sample size is usually n < 20. The purpose of these studies is to ensure the experimental effect you are seeing is a true potential effect and to justify the additional resources a larger experiment requires. They are also important for testing and refining the protocol.
Generally, cognitive neuroscience investigators try to balance statistical power against the smallest affordable sample size at the pilot stage. The goal of these studies is to provide provisional statistical evidence for the presence of an effect. They are not intended for research reports or publications; rather, they validate a line of inquiry, substantiate grant funding applications, and serve as the foundation for future studies.
After pilot or exploratory studies are completed in cognitive neuroscience experiments, the next step is to “run it again!” Rerunning the investigation, with various parameters in focus, is one way to overcome the reproducibility crisis in cognitive neuroscience experimental data. Low statistical power can typically be overcome with a larger sample size. To help determine an appropriate sample size, the Journal of Neuroscience published recommendations in 2020. As with many concepts and principles in cognitive neuroscience, the right answer depends on your experiment’s context and intended goal.
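To make the sample-size question concrete, here is a minimal, generic sketch of an a-priori power calculation for a two-group comparison. It uses the standard normal-approximation formula for a two-sample t-test; the effect size, alpha, and power values are illustrative assumptions, not the Journal of Neuroscience’s recommendations:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size for a two-sided, two-sample t-test.

    Normal approximation: n = 2 * ((z_{1-alpha/2} + z_{1-power}) / d) ** 2,
    where d is the standardized effect size (Cohen's d).
    """
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value for the two-sided test
    z_power = z(power)           # quantile corresponding to the desired power
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# A "medium" effect (Cohen's d = 0.5) needs ~63 participants per group,
# far more than a typical n < 20 pilot provides.
print(n_per_group(0.5))   # 63
print(n_per_group(0.8))   # 25 (a large effect needs far fewer participants)
```

Exact t-distribution-based calculators give slightly larger numbers for small samples, but the approximation makes the point: detecting modest effects reliably requires sample sizes well beyond pilot scale.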
Again, these initial studies are not intended to be published research reports. Yet sometimes they are published anyway, and that contributes to the reproducibility crisis.
Larger Sample Sets & Open Science
Gain confidence that the effect you’re seeing is a true effect with practical significance.
Moving beyond pilot to experimental trials intended for publication is typically the next step in a neuroscience laboratory’s research process.
Thanks to the pilot study, the researchers are confident in their methodology and have preliminary evidence of a true effect.
At this stage, they:
- Design the experiment with a larger sample set.
- Need to collect more data.
- Run the experimental trial again.
Even if the statistical tests are significant and consistent across trials, neuroscientists need to gather still more evidence to be confident they are seeing a true effect.
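One standard way to aggregate evidence across independent reruns is Fisher’s method, which combines the p-values of several trials into a single one. The sketch below is generic and illustrative (the p-values are made up), and it assumes the trials are independent:

```python
from math import exp, log

def fisher_combined_p(p_values: list[float]) -> float:
    """Combine independent p-values with Fisher's method.

    Under the null, X = -2 * sum(ln p_i) follows a chi-square distribution
    with 2k degrees of freedom. For even degrees of freedom the survival
    function has the closed form used below (no SciPy required).
    """
    k = len(p_values)
    x = -2.0 * sum(log(p) for p in p_values)
    # Chi-square survival function for df = 2k:
    # P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2) / i
        total += term
    return exp(-x / 2) * total

# Three reruns, each individually borderline, combine into strong evidence.
print(round(fisher_combined_p([0.04, 0.03, 0.06]), 4))   # 0.004
```

The point is the direction of the logic: rather than trusting one significant trial, repeated independent trials are pooled, and the combined evidence is what earns confidence in the effect.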
Open Science
In research publications, there is a growing movement referred to as “open science,” where the data and analysis scripts are published alongside the research narrative. In cognitive neuroscience, one of the best resources for open neuroscience research is the Neuroscience Information Framework (NIF; neuinfo.org).
The open science philosophy encompasses all stages of the research cycle, enhancing the transparency of experimental data collection, processing, storage, and review. It extends to experimental designs as well: sharing those details with the scientific community improves the replication and reproducibility of core neuroscience experiments. Such practices encourage scientists to keep producing high-caliber research and combat the growing replication crisis in cognitive neuroscience.
Groups like EmotivLABs are cultivating cognitive neuroscience research communities that allow them to share experimental designs with other researchers.
As the reproducibility crisis continues to impact the scientific community, the need for high-quality, reproducible research studies has never been higher. Several design options exist to allow other scientists to reproduce experiments. These options give neuroscience researchers the opportunity to:
- Determine their appropriate sample size.
- Use innovative, validated, reliable tools for data analysis.
- Consult with peers and scientific leadership.
- Readily apply the guiding principles of open science.
Diverse Remote Volunteer Communities
A small sample size undermines the validity and reproducibility of research because results drawn from a handful of participants cannot be generalized to the rest of the population; they do not encompass the neurodiversity of society. As such, remote data collection tools are the future of inclusive neuroscience.
EmotivLABs: Cultivating a Cognitive Neuroscience Community
Meet EMOTIV
Founded in 2011, EMOTIV is a San Francisco-based bioinformatics company with a mission of advancing our understanding of the human brain using custom electroencephalography (EEG) hardware, analysis, and visualization.
At the center of open science is collaboration. Emotiv’s platform and staff aim to promote scientific integrity and experimental rigor. Our scalable research platform, EmotivLABs, connects cognitive neuroscientists worldwide with a global population of research participants and investigators. Recognizing the cumulative nature of neuroscience research, we provide extensive, rich, multi-dimensional datasets, allowing researchers to draw meaningful conclusions from a wide sample.
Grow Your Research’s Sample Size
EmotivLABs combats the reproducibility crisis by connecting investigators to vetted, qualified research participants and alleviating the logistical burden of recruitment while ensuring sufficient statistical power.
Ensure Your Work Is Securely Stored, Transformable, and Recoverable
Another threat to reproducibility is analytic replication, which requires access to the original dataset to validate results. Finding and financing a safe storage location is tedious. Acting as a data repository, EmotivLABs securely stores your data after participants upload their recordings; that data is uploaded automatically and rigorously encrypted at every stage of transit and storage.
Streamlining Processes
To help investigators design cognitive neuroscience experiments and publish research studies that contribute to the larger scientific community, we developed a robust, streamlined toolkit, EmotivPro, to generate innovative, reproducible experiments.
Our explicit instructions and transparent methods act as a guide that allows investigators to verify the original findings. The intuitive, easy-to-use Experiment Builder enables other researchers to reproduce your study by providing a pre-made template within the platform. Alternatively, cognitive neuroscience researchers can create a unique experiment, tailoring each detail from scratch.
EMOTIV Technology
EMOTIV has designed a suite of tools to support every step of neuroscience research.
EmotivPRO software allows users to process, analyze, and visualize trial results. Researchers can also design professional-grade experiments in which any participant with an EMOTIV headset who meets the experimental criteria can take part.
A Software Development Kit (SDK) for EMOTIV is also available, so custom apps, interactions, or experimental designs can be run on the go using only the headset and a smartphone.
As more disciplines and commercial markets embrace neuroscience tools and methodologies, EMOTIV’s low-cost, easy-to-use EEG systems are being used in:
- Neuroscience research
- Health and wellness marketing initiatives
- Automotive industries
- Neuromarketing
- Consumer research
- Education
- Entertainment settings
Additionally, because EMOTIV headsets are high quality, affordable, and shippable worldwide, researchers can recruit and enroll individuals wherever they qualify. And because the processing software evaluates quality-control metrics, researchers can trust the data collection process.
Want to Learn More About What the EmotivLABs Platform Could Do for Your Research?
EmotivLABs enables you to build your experiment, deploy it safely and securely, recruit from a global panel of verified participants, and collect high-quality EEG data, all from one platform. Click here to learn more or request a demo.