Science Inventory

Practical examples of modeling choices and their consequences for risk assessment

Citation:

Davis, Allen, and Jeff Gift. Practical examples of modeling choices and their consequences for risk assessment. Society of Toxicology Annual Meeting, Baltimore, MD, March 10 - 14, 2019.

Impact/Purpose:

This presentation at a continuing education course will highlight differences in modeling guidance between major regulatory agencies in the US and Europe with a goal of informing participants how these differences will affect their dose-response assessments and conclusions.

Description:

Although benchmark dose (BMD) modeling has become the preferred approach to identifying a point of departure (POD) over the No Observed Adverse Effect Level (NOAEL), challenges remain in its application to human health risk assessment. BMD modeling, as currently implemented by the Environmental Protection Agency's (EPA) Benchmark Dose Software and the Netherlands National Institute for Public Health and the Environment's (RIVM) PROAST software, involves fitting multiple parametric models to animal toxicity data in order to calculate the BMD and its 95% lower confidence limit (BMDL). There is some consistency between the models used in each software platform, but modeling guidance often diverges between EPA and RIVM (and the European Food Safety Authority, EFSA). This presentation will highlight areas of divergence in modeling guidance and the effect they have on POD determination. RIVM/EFSA have recently recommended a frequentist approach to model averaging for quantal endpoints, while EPA is currently implementing an alternative approach that utilizes Bayesian methods. While model averaging appears promising for quantal endpoints, the differences between frequentist and Bayesian approaches could potentially lead to differences in POD identification. Of particular focus for this presentation will be the differences in guidance for modeling continuous endpoints, especially choices surrounding the assumed distribution of the data (normal vs. lognormal), the benchmark response (BMR) chosen (percent change vs. change based on control standard deviation), default vs. biologically based BMRs, and the model selection framework used to pick the "best" model or model results. Other areas where different modeling methods can affect POD identification will also be covered (modeling clustered data, multi-tumor data, and time-to-tumor data).
Disclaimer: The views expressed in this abstract are those of the authors and do not necessarily reflect the views or policies of the U.S. EPA.
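To make the workflow described above concrete, the following is a minimal sketch of BMD estimation for a single quantal endpoint: a two-parameter logistic dose-response model is fit by maximum likelihood, and the BMD is solved for at a 10% extra-risk BMR. The dataset, starting values, and choice of a single model (rather than the multi-model or model-averaged approaches the abstract discusses) are illustrative assumptions, not EPA or RIVM methodology, and a real assessment would also compute the BMDL.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, logit

# Hypothetical quantal dataset: dose, number of animals affected, group size
doses = np.array([0.0, 10.0, 50.0, 150.0, 400.0])
affected = np.array([1, 2, 5, 12, 18])
n = np.array([20, 20, 20, 20, 20])

def neg_log_lik(params):
    """Binomial negative log-likelihood for a logistic model p(d) = expit(a + b*d)."""
    a, b = params
    p = np.clip(expit(a + b * doses), 1e-10, 1 - 1e-10)
    return -np.sum(affected * np.log(p) + (n - affected) * np.log(1 - p))

# Maximum-likelihood fit (starting values are an illustrative guess)
fit = minimize(neg_log_lik, x0=[-3.0, 0.01], method="Nelder-Mead")
a_hat, b_hat = fit.x

# BMD at a 10% extra-risk BMR: solve (p(d) - p0) / (1 - p0) = 0.10 for d
bmr = 0.10
p0 = expit(a_hat)                      # modeled background response
target = p0 + bmr * (1 - p0)           # response level defining the BMD
bmd = (logit(target) - a_hat) / b_hat
print(f"BMD (10% extra risk): {bmd:.1f}")
```

In practice both BMDS and PROAST fit a suite of such models; the divergences the presentation covers arise in how the resulting fits are screened, selected, or averaged to arrive at a POD.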

Record Details:

Record Type: DOCUMENT (PRESENTATION/ABSTRACT)
Product Published Date: 05/11/2018
Record Last Revised: 05/11/2018
OMB Category:Other
Record ID: 340688