Conference on Foundations and Advances of Machine Learning in Official Statistics, 3–5 April 2024

Session 3.3 Methodology I

Evaluating machine learning models in non-standard settings: An overview and new findings

Roman Hornung* 1, Malte Nalenz2, Lennart Schneider1, Andreas Bender1, Ludwig Bothmann1, Bernd Bischl1, Thomas Augustin2, Anne-Laure Boulesteix1

Abstract

Estimating the generalization error (GE) of machine learning models is fundamental, with resampling methods being the most common approach. However, in non-standard settings, particularly those where observations are not independently and identically distributed, resampling based on simple random data divisions may yield biased GE estimates. This talk presents well-grounded guidelines for GE estimation in several such non-standard settings: clustered data, spatial data, unequal sampling probabilities, concept drift, and hierarchically structured outcomes. Our overview combines well-established methodologies with other existing methods that, to our knowledge, have rarely been considered in these particular settings. A unifying principle across these techniques is that the test data used in each iteration of the resampling procedure should reflect the new observations to which the model will be applied, while the training data should be representative of the entire data set used to obtain the final model. Our guidelines draw on both the existing literature and our own simulation studies, which assess the necessity of GE-estimation methods tailored to the respective setting. Our findings corroborate the concern that standard resampling methods often yield biased GE estimates in non-standard settings, underscoring the importance of tailored GE estimation.
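For the clustered-data case, the unifying principle above implies that all observations from one cluster should land in the same resampling fold, so that test folds mimic genuinely new clusters. A minimal sketch of such a group-wise K-fold split, in plain Python (the helper name `grouped_kfold` is hypothetical; scikit-learn's `GroupKFold` implements the same idea):

```python
import random
from collections import defaultdict

def grouped_kfold(groups, k=5, seed=0):
    """Assign observation indices to k folds such that all observations
    sharing a group label (e.g., cluster or patient) fall into the same fold.
    Illustrative sketch only, not the authors' exact procedure."""
    rng = random.Random(seed)
    unique_groups = sorted(set(groups))
    rng.shuffle(unique_groups)
    # round-robin assignment of whole groups to folds
    fold_of_group = {g: i % k for i, g in enumerate(unique_groups)}
    folds = defaultdict(list)
    for idx, g in enumerate(groups):
        folds[fold_of_group[g]].append(idx)
    return [folds[i] for i in range(k)]

# toy clustered data: 4 observations from each of 10 clusters
groups = [c for c in range(10) for _ in range(4)]
folds = grouped_kfold(groups, k=5)

# verify: no cluster is split between a test fold and its training folds
for i, test_idx in enumerate(folds):
    test_groups = {groups[j] for j in test_idx}
    train_groups = {groups[j] for f, fold in enumerate(folds)
                    if f != i for j in fold}
    assert test_groups.isdisjoint(train_groups)
```

In contrast, a simple random split would scatter a cluster's observations across training and test folds, letting the model exploit within-cluster similarity and typically producing an optimistically biased GE estimate.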

*: Speaker

1: LMU Munich, Munich Center for Machine Learning - Germany

2: LMU Munich - Germany