Clearinghouse for Training Modules to Enhance Data Reproducibility

In January 2014, NIH launched a series of initiatives to enhance rigor and reproducibility in research. As part of this initiative, NIGMS, along with nine other NIH institutes and centers, issued the funding opportunity announcement RFA-GM-15-006 to develop, pilot and disseminate training modules to enhance data reproducibility. Graduate students, postdoctoral fellows and early-stage investigators are the primary audiences for these training modules.

For the benefit of the scientific community, we will be posting the products of these grants on this website as they become available. In addition, we are sharing here training modules developed by NIH.

NIH Rigor and Reproducibility Training Modules

These modules, developed by NIH, focus on integral aspects of rigor and reproducibility in the research endeavor, such as bias, blinding and exclusion criteria. The modules are not meant to be comprehensive, but rather are intended as a foundation to build on and a way to stimulate conversations, which may be facilitated by the use of the accompanying discussion materials. Currently, the modules are being integrated into NIH intramural training activities.

Introduction to the Modules [PDF, 110KB]

Module 1: Lack of Transparency
Lack of Transparency Discussion Material [PDF, 97.2KB]

Module 2: Blinding and Randomization
Blinding and Randomization Discussion Material [PDF, 104KB]

Module 3: Biological and Technical Replicates
Biological and Technical Replicates Discussion Material [PDF, 98.7KB]

Module 4: Sample Size, Outliers, and Exclusion Criteria
Sample Size, Outliers, and Exclusion Criteria Discussion Material [PDF, 107KB]

NIH Office of Disease Prevention (ODP) Course on Pragmatic and Group-Randomized Trials in Public Health and Medicine

This 7-part online course aims to help researchers design and analyze group-randomized trials (GRTs). It includes video presentations, slide sets, suggested reading materials, and guided activities. The course is presented by ODP’s Director, Dr. David M. Murray.

Access the course and related materials.
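
As one illustration of a concept central to GRT design (this sketch is not taken from the course itself), the sample size required for a group-randomized trial is inflated relative to an individually randomized trial by the standard design effect, DEFF = 1 + (m - 1) x ICC, where m is the number of members per group and ICC is the intraclass correlation. A minimal Python sketch with illustrative numbers:

import math

def design_effect(members_per_group: int, icc: float) -> float:
    # Variance inflation due to clustering: DEFF = 1 + (m - 1) * ICC
    return 1 + (members_per_group - 1) * icc

def grt_total_n(n_individual: int, members_per_group: int, icc: float) -> int:
    # Participants needed when an individually randomized design would need n_individual
    return math.ceil(n_individual * design_effect(members_per_group, icc))

# Illustrative values only: 20 members per group, ICC = 0.02, and a comparison
# design that would need 200 individually randomized participants
print(design_effect(20, 0.02))     # 1.38
print(grt_total_n(200, 20, 0.02))  # 276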

Improving Reproducibility in Research

Aaron Carroll, Indiana University School of Medicine, R25 GM116146

To promote better training and ensure the reliability and reproducibility of research, we developed a series of webisodes (thematically related online videos) targeted at graduate students, postdoctoral fellows, and beginning investigators that address critical features of experimental design and analysis/reporting.

Module 1: Experimental Design Learning Module
The Experimental Design Learning Module focuses on the intricacies of designing research that is robust, with an eye toward making it reproducible. It comprises four distinct learning units: 1) Replication, 2) Randomization, 3) Pitfalls with Experimental Design, and 4) Measurement. Each of these learning units has one or more sub-topics, each the subject of an individual webisode.
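
As a hypothetical illustration of the kind of issue the Replication unit raises (the data and names below are invented, not module content), technical replicates of the same biological sample should generally be averaged so that statistical tests are run on independent biological replicates:

import pandas as pd
from scipy import stats

# Two technical replicates per animal; animals are the independent biological replicates
measurements = pd.DataFrame({
    "group":  ["control"] * 6 + ["treated"] * 6,
    "animal": ["c1", "c1", "c2", "c2", "c3", "c3", "t1", "t1", "t2", "t2", "t3", "t3"],
    "value":  [1.0, 1.1, 0.9, 1.0, 1.2, 1.1, 1.6, 1.5, 1.4, 1.6, 1.7, 1.5],
})

# Collapse technical replicates to one value per animal before testing
per_animal = measurements.groupby(["group", "animal"], as_index=False)["value"].mean()
control = per_animal.loc[per_animal["group"] == "control", "value"]
treated = per_animal.loc[per_animal["group"] == "treated", "value"]

t_stat, p_value = stats.ttest_ind(treated, control)
print(f"n = {len(control)} vs. {len(treated)} animals, t = {t_stat:.2f}, p = {p_value:.3f}")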

Module 2: Analysis/Reporting Learning Module
The Analysis/Reporting Learning Module covers the factors that are critical to writing about research with enough clarity to ensure its reproducibility. Very few researchers receive formal training in how to properly report findings to support reproducibility. It comprises three distinct learning units: 1) Power and P-values, 2) Scientific Writing, and 3) The Review Process. Each of these units has one or more sub-topics, each the subject of an individual webisode.
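
As a small example of the kind of calculation the Power and P-values unit addresses (the effect size and thresholds below are assumed for illustration), a two-sample t-test power analysis can be run with the statsmodels package:

from statsmodels.stats.power import TTestIndPower

# Sample size per group for a two-sample t-test, assuming a large effect (Cohen's d = 0.8),
# a two-sided alpha of 0.05, and 80% power -- all illustrative choices
n_per_group = TTestIndPower().solve_power(effect_size=0.8,
                                          alpha=0.05,
                                          power=0.80,
                                          alternative="two-sided")
print(f"Approximately {n_per_group:.0f} subjects per group")  # about 26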

Society for Neuroscience Rigor and Reproducibility Training Webinars

Manny DiCicco-Bloom, Rutgers University; Cheryl Sisk, Michigan State University, R25 DA041326

Promoting Awareness and Knowledge to Enhance Scientific Rigor in Neuroscience

Webinar 1: Improving Experimental Rigor and Enhancing Data Reproducibility in Neuroscience
Post-Webinar Discussion Questions
The topics of scientific rigor and data reproducibility have received increasing coverage in the scientific and mainstream media, and they are being addressed by publishers, professional organizations and funding agencies. This webinar covers scientific rigor as it pertains to preclinical neuroscience research.


Webinar 2: Minimizing Bias in Experimental Design and Execution
Post-Webinar Discussion Questions
Investigations into the lack of reproducibility in preclinical research often identify unintended biases in experimental planning and execution. This webinar covers random sampling, blinding and balancing experiments to avoid sources of bias.
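
A minimal sketch of one such practice, randomized allocation with a coded blinding key (the subject names and group sizes are hypothetical, and this is not material from the webinar):

import random

random.seed(42)  # record the seed so the allocation itself can be reproduced

subjects = [f"mouse_{i:02d}" for i in range(1, 13)]
treatments = ["vehicle"] * 6 + ["drug"] * 6

random.shuffle(subjects)
allocation = dict(zip(subjects, treatments))                        # unblinding key, kept separate
coded_ids = {s: f"sample_{i:02d}" for i, s in enumerate(subjects)}  # labels shown to the scorer

# Whoever scores the outcomes sees only the coded labels; the allocation key is opened after scoring
for subject in sorted(subjects):
    print(subject, "->", coded_ids[subject])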


Webinar 3: Best Practices in Post-Experimental Data Analysis
Post-Webinar Discussion Questions
Proper data handling standards, including appropriate use of statistical tests, are integral to rigorous and reproducible neuroscience research. Training in quantitative neuroscience is a specific area of emphasis for the BRAIN Initiative, and rigorous statistical analysis methods are included in the recent Proposed Principles and Guidelines for Reporting Preclinical Research [PDF, 69KB]. This webinar covers best practices in post-experimental data analysis.
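
One common safeguard in post-experimental analysis, shown here as an illustrative sketch with made-up p-values rather than webinar material, is to correct for multiple comparisons before declaring findings significant:

from statsmodels.stats.multitest import multipletests

raw_pvalues = [0.001, 0.012, 0.030, 0.045, 0.20, 0.55]  # hypothetical test results

# Benjamini-Hochberg false discovery rate correction
reject, adjusted, _, _ = multipletests(raw_pvalues, alpha=0.05, method="fdr_bh")

for raw, adj, significant in zip(raw_pvalues, adjusted, reject):
    print(f"raw p = {raw:.3f}, adjusted p = {adj:.3f}, significant after correction: {significant}")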


Webinar 4: Best Practices in Data Management and Reporting
Post-Webinar Discussion Questions
Efforts to enhance scientific rigor, reproducibility and robustness critically depend on archiving and retrieving experimental records, protocols, primary data and subsequent analyses. In this webinar, presenters discuss best practices and challenges for data management and reporting, particularly when dealing with information security and sensitive material; archiving and disclosure of pre- and post-hoc data analytics; and data management on multidisciplinary teams that include collaborators around the globe.
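
As one hedged example of the archiving practices this webinar's theme points to (the file names, parameters, and fields below are invented), a small provenance record saved alongside each analysis output lets a result be traced back to the exact input file and settings used:

import datetime
import hashlib
import json
import pathlib

def sha256_of(path: pathlib.Path) -> str:
    # Checksum of the raw data file, so later readers can verify they have the same input
    return hashlib.sha256(path.read_bytes()).hexdigest()

raw_data = pathlib.Path("recording_session_01.csv")  # assumed to exist
provenance = {
    "input_file": raw_data.name,
    "input_sha256": sha256_of(raw_data),
    "parameters": {"filter_hz": 300, "threshold_sd": 4.0},  # illustrative analysis settings
    "analyzed_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
}
pathlib.Path("recording_session_01.provenance.json").write_text(json.dumps(provenance, indent=2))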


Webinar 5: Statistical Applications in Neuroscience
Post-Webinar Discussion Questions
How can neuroscientists improve their “statistical thinking” and make full and effective use of their data? This webinar covers common applications of statistics in neuroscience, including the types of research questions statistics are best positioned to address, modeling paradigms and exploratory data analysis. The presenters also share examples and case studies from their research.


Webinar 6: Experimental Design to Minimize Systemic Biases: Lessons from Rodent Behavioral Assays and Electrophysiology Studies
Post-Webinar Discussion Questions (no longer available)
Common sources of bias in animal behavior and electrophysiology experiments can be minimized or avoided by following best practices of unbiased experimental design and data analysis and interpretation. In this webinar, presenters discuss experimental design and hypothesis testing for mouse behavioral assays, as well as sampling, interpretational bias and referencing in in vitro and in vivo electrophysiology recording studies.


Webinar 7: Tackling Challenges in Scientific Rigor: The (Sometimes) Messy Reality of Science
This webinar explores practical examples of the challenges and solutions in conducting rigorous science from neuroscientists at various career stages. It focuses on developing the interpersonal, scientific and technical skills needed to address issues in scientific rigor, such as what to do when you can't replicate a published result, how to get support from a mentor, and how to cope with career pressures that might affect the quality of your science.

This page last reviewed on October 18, 2017