Science Inventory

The Effect of Noise on the Predictive Limit of QSAR Models

Citation:

Kolmar, S., and C. Grulke. The Effect of Noise on the Predictive Limit of QSAR Models. Journal of Cheminformatics 13:92 (2021). Springer, New York, NY. https://doi.org/10.1186/s13321-021-00571-7

Impact/Purpose:

One of the key challenges in Quantitative Structure Activity Relationship (QSAR) modeling is evaluating the predictive performance of models, and evaluation methodology has been the subject of many studies over the past several decades. Evaluation of predictive performance has critical implications for drug discovery, toxicological risk assessment, and environmental regulation, among other fields. The importance of model evaluation and comparison is reflected in the fourth validation principle of the Organisation for Economic Co-operation and Development (OECD), which states that a QSAR model must have “appropriate measures of goodness of fit, robustness, and predictivity.” While best-practice guidelines have often emphasized the need for external validation on compounds rigorously excluded from the training set, implicit assumptions about error in the training and validation data, and how these assumptions might affect performance evaluation, tend to be overlooked. These assumptions and their effects must be examined in order to appropriately evaluate the predictivity of QSAR models and use their predictions with confidence.

Description:

A key challenge in the field of Quantitative Structure Activity Relationships (QSAR) is how to effectively treat experimental error in the training and evaluation of computational models. It is often assumed in the field that models cannot produce predictions more accurate than their training data. Additionally, it is implicitly assumed, by necessity, that data points in test or validation sets contain no error and that each data point is a population mean. This work proposes the hypothesis that QSAR models can make predictions more accurate than their training data, and that the error-free test set assumption leads to significant misevaluation of model performance. The study used eight datasets spanning six common QSAR endpoints, because different endpoints should carry different amounts of experimental error, reflecting the varying complexity of the measurements. Up to 15 levels of simulated Gaussian-distributed random error were added to the datasets, and models were built on the error-laden data using five different algorithms. The models were trained on the error-laden data and then evaluated on both error-laden and error-free test sets. The results show that, at each level of added error, the RMSE for evaluation on the error-free test sets was always lower. These results support the hypothesis that, at least under Gaussian-distributed random error, QSAR models can make predictions more accurate than their training data, and that evaluating models on error-laden test and validation sets may give a flawed measure of model performance. These findings have implications for how QSAR models are evaluated, especially in disciplines where experimental error is very large, such as computational toxicology.
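The effect described above can be sketched with a small simulation. The setup below is purely illustrative and is not the paper's data or method: it uses a hypothetical linear structure-activity relationship with synthetic descriptors and an ordinary least-squares model, rather than the eight datasets and five algorithms used in the study. It shows the two claims in miniature: predictions on error-free targets beat the noise level in the training labels, and RMSE measured against an error-laden test set is always worse than against the error-free one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic setup (not the paper's datasets): a linear
# "true" activity y = X @ w with 10 descriptors; the true y values
# play the role of error-free population means.
n_train, n_test, n_feat = 200, 500, 10
w_true = rng.normal(size=n_feat)
X_train = rng.normal(size=(n_train, n_feat))
X_test = rng.normal(size=(n_test, n_feat))
y_train_true = X_train @ w_true
y_test_true = X_test @ w_true

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

results = {}
for sigma in [0.5, 1.0, 2.0]:  # simulated experimental-error levels
    # Add Gaussian-distributed random error to training and test labels.
    y_train = y_train_true + rng.normal(scale=sigma, size=n_train)
    y_test_noisy = y_test_true + rng.normal(scale=sigma, size=n_test)

    # Fit ordinary least squares on the error-laden training data.
    w_hat, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
    pred = X_test @ w_hat

    results[sigma] = (rmse(pred, y_test_true), rmse(pred, y_test_noisy))
    print(f"sigma={sigma}: RMSE vs error-free test = {results[sigma][0]:.3f}, "
          f"vs error-laden test = {results[sigma][1]:.3f}")
```

Because least squares averages over many noisy training points, the fitted model's prediction error shrinks roughly as sigma times the square root of (descriptors / training size), so it falls well below the label noise sigma, while the error-laden evaluation inflates the apparent RMSE by roughly the noise level in quadrature.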

Record Details:

Record Type: DOCUMENT (JOURNAL/PEER REVIEWED JOURNAL)
Product Published Date: 11/25/2021
Record Last Revised: 12/09/2021
OMB Category: Other
Record ID: 353544