Science Inventory

Can interpreting sediment toxicity tests at mega sites benefit from novel approaches to normalization to address batching of tests?

Citation:

Dillon, F., S. Roark, K. O'Neal, J. Steevens, J. Sinclair, AND D. Mount. Can interpreting sediment toxicity tests at mega sites benefit from novel approaches to normalization to address batching of tests? Sediment Management Work Group Fall Sponsor Forum, Charleston, SC, September 27 - 28, 2017.

Impact/Purpose:

Particularly for large contaminated sediment sites, extensive sediment toxicity testing programs are pursued to characterize the extent of sediment toxicity to benthic organisms and to develop exposure-response relationships that can be used to identify risk benchmarks and inform the baseline ecological risk assessment and remedial design. For logistical reasons, it is often impractical to test the toxicity of large numbers of sediment samples in simultaneous exposures, requiring that toxicity testing take place in batches, with a subset of the overall sample set tested within each batch. For a variety of reasons, it is generally inappropriate to combine absolute response data from tests not conducted concurrently; instead, some form of response normalization among batches is required. While normalization to an independent control sample is often used, this approach has potential pitfalls and could create artifacts. This presentation examines the issues surrounding appropriate normalization of data from batched sediment toxicity tests and explores some alternative means of batch normalization based on analysis of data sets from past sampling programs. Emphasis is on improving the interpretation of the data, not on conclusions regarding potential risk at specific sites.

Description:

Sediment toxicity tests are a key tool used in ecological risk assessments for contaminated sediment sites. Interpreting test results and defining toxicity is often a challenge. This is particularly true at mega sites, where the testing regime is large and by necessity performed in batches of concurrent tests. This reality introduces variability and uncertainty that add to the challenge of interpreting results. To better interpret exposure-response relationships that can then guide remedial decisions, a means to account for the variability introduced by batched data collection is important. Several sources of variability can be expected to influence the absolute performance of test organisms and to vary among discrete batches of sediment toxicity tests, particularly for sublethal endpoints (weight/growth, biomass, reproduction). These sources include the starting size/age/health of the organisms used to begin the test, conditions within each test (e.g., food amount/quality, temperature), and random variation. For analyses that incorporate data from multiple batches of simultaneously conducted sediment exposures, the presumed goal is to parse differences in organism performance between a) batch-related variation and b) contamination-induced changes, such that the overall exposure-response gradient can be evaluated independent of batch-related variation. This is often done by normalizing data to a control treatment, though this approach is not without potential for creating artifacts in the analysis. Reference sediments are sometimes used to provide information on the performance expected in site sediments lacking meaningful contamination, under the concept that sediments from geographically or geochemically similar locations lacking site influence might better represent the non-contaminant factors that could cause responses in clean site sediments to vary from a laboratory control. While this is a conceptually reasonable goal, it is often hard to implement unambiguously because we do not necessarily understand quantitatively the factors that influence absolute organism response, or how well the selected reference locations match those factors as they exist in site sediments. This presentation explores alternative means of normalizing test results in large-scale testing programs that include batches of concurrent toxicity tests, and illustrates how the choice of normalization influences test interpretation relative to both control and reference sediments. Multiple approaches to test normalization will be presented and illustrated using data from mega sites (e.g., Portland Harbor). The implications of appropriate test normalization for remedial decision making will be discussed.
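As a rough illustration of the distinction between control normalization and batch-level alternatives, the minimal sketch below contrasts two ways of placing batched sublethal endpoint data on a common scale. The data values, column names, and the specific alternative shown (batch centering) are hypothetical illustrations chosen for clarity; they are not the data sets or the specific normalization approaches presented at the forum.

# Minimal sketch (not the presenters' method): two ways to place batched
# sublethal endpoint data (e.g., growth) on a common scale.
# The data and column names ("batch", "treatment", "growth") are hypothetical.
import pandas as pd

data = pd.DataFrame({
    "batch":     ["A", "A", "A", "B", "B", "B"],
    "treatment": ["control", "site1", "site2", "control", "site3", "site4"],
    "growth":    [0.52, 0.47, 0.30, 0.61, 0.55, 0.33],  # mg dry wt per organism
})

# 1) Control normalization: express each sample as a percentage of the
#    mean control response within its own batch.
ctrl_mean = (data[data["treatment"] == "control"]
             .groupby("batch")["growth"].mean())
data["pct_of_control"] = 100 * data["growth"] / data["batch"].map(ctrl_mean)

# 2) Batch centering: subtract each batch's overall mean response, one
#    alternative when the control sample itself is variable among batches.
data["centered"] = data["growth"] - data.groupby("batch")["growth"].transform("mean")

print(data)

In the control-normalized column, each sample is expressed relative to its own batch's control, so any artifact in a single control sample propagates to every sample in that batch; the centered column instead removes the batch mean without privileging the control, which is one way batch-level normalization can sidestep that dependence.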

Record Details:

Record Type: DOCUMENT (PRESENTATION/SLIDE)
Product Published Date: 10/28/2017
Record Last Revised: 10/02/2017
OMB Category: Other
Record ID: 337781