
On Tracer Breakthrough Curve Dataset Size, Shape, and Statistical Distribution

Citation:

Field, M. On Tracer Breakthrough Curve Dataset Size, Shape, and Statistical Distribution. Advances in Water Resources. Elsevier Science Ltd, New York, NY, 141:1-19 (2020). https://doi.org/10.1016/j.advwatres.2020.103596

Impact/Purpose:

The purpose of this paper is to develop an in-depth investigation of tracer breakthrough curves (BTCs) in terms of size, shape, and statistical distribution. The analysis begins with a general explanation of the sampling process as it applies to a tracing study. An example BTC dataset obtained from an instantaneous (mathematical impulse function) tracer release, consisting of a very large, densely packed set of data points, then follows; it illustrates the value of frequent sampling so that important BTC features are not missed. Reducing the sampling frequency and smoothing the tracer BTC further emphasizes important aspects of the breakthrough curve. Lastly, because tracer BTCs have only rarely been evaluated from a statistical perspective, a detailed statistical assessment of the example BTC was undertaken with the objective of showing that applying appropriate statistical distributions can assist in understanding the shape and nature of a measured tracer BTC.
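To make the impulse-release setup concrete, the sketch below generates a synthetic BTC from the standard one-dimensional advection-dispersion solution for an instantaneous injection and samples it at two frequencies. This is a minimal illustration, not the paper's dataset: all parameter values (mass, cross-sectional area, velocity, dispersion coefficient, travel distance) are assumed placeholders.

import numpy as np

# Synthetic BTC from the 1-D advection-dispersion solution for an
# instantaneous (impulse) tracer release. Parameter values are
# illustrative assumptions, not values from the paper.
M = 100.0   # injected tracer mass (g)
A = 2.0     # flow cross-sectional area (m^2)
v = 50.0    # mean flow velocity (m/h)
D = 25.0    # longitudinal dispersion coefficient (m^2/h)
x = 500.0   # distance from injection to sampling station (m)

def btc(t):
    """Concentration (g/m^3) at distance x, time t (h) after an impulse release."""
    return (M / (A * np.sqrt(4.0 * np.pi * D * t))) * np.exp(-(x - v * t) ** 2 / (4.0 * D * t))

# High-frequency sampling (every 0.01 h) versus sparse sampling (every 1 h):
t_dense = np.arange(0.01, 24.0, 0.01)
t_sparse = np.arange(0.5, 24.0, 1.0)
print("dense-sampling peak:  %.3f g/m^3" % btc(t_dense).max())
print("sparse-sampling peak: %.3f g/m^3" % btc(t_sparse).max())
# With these values the sparse series clearly underestimates the peak,
# which is the motivation for frequent sampling described above.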

Description:

A tracer breakthrough curve (BTC) for each sampling station is the ultimate goal of every quantitative hydrologic tracing study, and dataset size can critically affect the BTC. Groundwater-tracing data obtained using in situ automatic sampling or detection devices may result in very high-density datasets. Data-dense tracer BTCs obtained using in situ devices and stored in dataloggers can result in visually cluttered, overlapping data points. The relatively large amounts of data recorded by the high-frequency settings available on in situ devices ensure that important tracer BTC features, such as data peaks, are not missed. However, such dense datasets can also be difficult to interpret. Even more difficult is the application of such dense datasets in solute-transport models, which may not be able to adequately reproduce tracer BTC shapes due to the overwhelming mass of data. One solution to the difficulties associated with analyzing, interpreting, and modeling dense datasets is the selective removal of blocks of data from the total dataset. Although it is possible to skip blocks of tracer BTC data in a periodic sense (data decimation) so as to lessen the size and density of the dataset, skipping or deleting blocks of data may also result in missing the very features that the high-frequency detection settings were intended to capture. Rather than removing, reducing, or reformulating overlapping data, signal filtering and smoothing may be employed, but smoothing errors (e.g., averaging errors, outliers, and potential time shifts) need to be considered. Fitting appropriate probability distributions to tracer BTCs may be used to describe typical tracer BTC shapes, which usually include long tails. Recognizing the probability distributions applicable to tracer BTCs can help in understanding some aspects of tracer migration.
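As a hedged illustration of the three approaches named above, the Python sketch below applies decimation, moving-average smoothing, and a long-tailed distribution fit to a synthetic dense BTC. The log-normal is used only as one commonly cited long-tailed candidate; the paper's actual statistical assessment is not reproduced here, and all numbers are assumed.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Illustrative dense BTC: a long-tailed pulse plus measurement noise,
# standing in for a logged in situ record (not data from the paper).
t = np.arange(0.01, 48.0, 0.01)              # time (h)
c = stats.lognorm.pdf(t, s=0.5, scale=10.0)  # long-tailed BTC shape
c_noisy = c + rng.normal(0.0, 0.002, t.size)

# 1) Decimation: keep every 100th point, at the risk of missing the peak.
t_dec, c_dec = t[::100], c_noisy[::100]

# 2) Smoothing: centered moving average; it flattens the peak slightly and
#    can shift features in time -- the smoothing errors noted above.
w = 51
c_smooth = np.convolve(c_noisy, np.ones(w) / w, mode="same")
print("raw peak %.4f | smoothed peak %.4f | decimated peak %.4f"
      % (c_noisy.max(), c_smooth.max(), c_dec.max()))

# 3) Distribution fit: treat the normalized BTC as a travel-time density,
#    draw travel times in proportion to it, and fit a log-normal.
c_pos = np.clip(c_noisy, 0.0, None)
sample = rng.choice(t, size=5000, p=c_pos / c_pos.sum())
shape, loc, scale = stats.lognorm.fit(sample, floc=0.0)
print(f"fitted log-normal: s={shape:.3f}, scale={scale:.3f}")

Comparing the fitted distribution's tail against the measured BTC tail is one simple way such a fit can inform interpretation of tracer migration.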

Record Details:

Record Type: DOCUMENT (JOURNAL/PEER REVIEWED JOURNAL)
Product Published Date: 05/21/2020
Record Last Revised: 06/26/2020
OMB Category: Other
Record ID: 349235