Science Inventory

A novel study evaluation strategy in the systematic review of animal toxicology studies for human health assessments of environmental chemicals.

Citation:

Dishaw, L., E. Yost, X. Arzuaga, A. Luke, A. Kraft, T. Walker, AND K. Thayer. A novel study evaluation strategy in the systematic review of animal toxicology studies for human health assessments of environmental chemicals. ENVIRONMENT INTERNATIONAL. Elsevier B.V., Amsterdam, Netherlands, 141:105736, (2020). https://doi.org/10.1016/j.envint.2020.105736

Impact/Purpose:

This manuscript describes a method developed to evaluate the quality of animal toxicology studies used in the EPA's Integrated Risk Information System (IRIS) Program assessments. Study evaluation is an important step of the systematic review process, used to characterize the strengths and weaknesses of individual studies. The manuscript provides an overview of the method as well as specific examples of how it was applied within the context of the assessments of two phthalates, diisobutyl phthalate (DIBP) and diethyl phthalate (DEP).

Description:

A key aspect of the systematic review process is study evaluation to understand the strengths and weaknesses of individual studies included in the review. The present manuscript describes the process currently used by the Environmental Protection Agency’s (EPA) Integrated Risk Information System (IRIS) Program to evaluate animal toxicity studies, illustrated by application to the recent systematic reviews of two phthalates: diisobutyl phthalate (DIBP) and diethyl phthalate (DEP). The IRIS Program uses a domain-based approach that was developed after careful consideration of tools used by others to evaluate experimental animal studies in toxicology and pre-clinical research.

Standard practice is to have studies evaluated by at least two independent reviewers for aspects related to reporting quality, risk of bias/internal validity (e.g., randomization, blinding at outcome assessment, methods used to expose animals and assess outcomes), and sensitivity, to identify factors that may limit the ability of a study to detect a true effect. To promote consistency across reviewers, prompting considerations and example responses are provided, and a pilot phase is conducted. The evaluation process is performed separately for each outcome reported in a study, as the utility of a study may vary for different outcomes. Input from subject matter experts is used to identify chemical- and outcome-specific considerations (e.g., lifestage of exposure and outcome assessment when considering reproductive effects) to guide judgments within particular evaluation domains.

For each evaluation domain, reviewers reach a consensus on a rating of Good, Adequate, Deficient, or Critically Deficient. These individual domain ratings are then used to determine the overall confidence in the study (High Confidence, Medium Confidence, Low Confidence, or Deficient). Study evaluation results, including the justifications for reviewer judgments, are documented and made publicly available in EPA’s version of the Health Assessment Workspace Collaborative (HAWC), a free and open-source web-based software application.
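The rating structure described above (per-outcome domain ratings of Good/Adequate/Deficient/Critically Deficient, rolled up into an overall confidence level) can be sketched as a small data model. The rating scale and confidence labels come from the abstract; the domain names in the example and the roll-up rule in `overall_confidence` are illustrative assumptions only, since in the IRIS process the overall classification is a consensus expert judgment, not a mechanical formula.

```python
from dataclasses import dataclass
from enum import IntEnum


class DomainRating(IntEnum):
    """Per-domain consensus ratings named in the abstract."""
    CRITICALLY_DEFICIENT = 0
    DEFICIENT = 1
    ADEQUATE = 2
    GOOD = 3


@dataclass
class OutcomeEvaluation:
    """Domain ratings for one outcome in one study (evaluated per outcome)."""
    outcome: str
    domain_ratings: dict  # domain name -> DomainRating

    def overall_confidence(self) -> str:
        # Hypothetical heuristic for illustration only: the actual IRIS
        # determination is a reviewer consensus informed by the domain
        # ratings, not a fixed rule like the one below.
        ratings = list(self.domain_ratings.values())
        if any(r is DomainRating.CRITICALLY_DEFICIENT for r in ratings):
            return "Deficient"
        if all(r is DomainRating.GOOD for r in ratings):
            return "High Confidence"
        if all(r >= DomainRating.ADEQUATE for r in ratings):
            return "Medium Confidence"
        return "Low Confidence"


# Example: one outcome rated across three (assumed) evaluation domains.
ev = OutcomeEvaluation(
    outcome="male reproductive development",
    domain_ratings={
        "randomization": DomainRating.GOOD,
        "blinding at outcome assessment": DomainRating.ADEQUATE,
        "exposure methods": DomainRating.GOOD,
    },
)
```

Modeling the evaluation per outcome mirrors the abstract's point that a single study may merit different confidence levels for different outcomes.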

Record Details:

Record Type: DOCUMENT (JOURNAL/PEER REVIEWED JOURNAL)
Product Published Date: 05/17/2020
Record Last Revised: 08/16/2021
OMB Category:Other
Record ID: 352594