## Optimising machine learning classification for statistical outcomes

### Keno Krewer^{*,1}, Roman Höhn^{1}

#### Abstract

Statistics are aggregated properties of items belonging to a certain class. (Example: consumer price indices are aggregated price changes of items classed as consumer goods.) Machine learning models can classify items for statistics. Typically, the machine models the probability of each item belonging to each class and picks the most probable class, thereby minimising the probability of misclassification. However, different misclassifications impact the statistical outcome differently. (Example: misclassifying a pear as an apple hardly impacts the average price of "apples" compared to misclassifying an iPhone as an apple.) Minimising the severity of misclassification therefore improves the statistical outcome over minimising the probability of misclassification. Minimising the severity requires understanding how misclassifications impact the respective statistic, typically in the form of a cost matrix. We have developed a framework to assess how misclassifications impact statistics based on a hierarchical classification, and specifically how they impact consumer price indices computed from scanner data. We start from the observation that a misclassification impacts the statistics of two classes: the donor class, to which the item should belong, and the acceptor class, to which the item is misassigned. We illustrate the derivation of a cost matrix with the example of multilateral price index calculation from scanner data.
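The decision rule described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the class names, probabilities, and cost values below are hypothetical, assuming a model that outputs per-class probabilities and a cost matrix whose entry `cost[d, a]` gives the severity of misassigning a donor-class-`d` item to acceptor class `a`.

```python
import numpy as np

def assign_class(probs: np.ndarray, cost: np.ndarray) -> int:
    """Pick the class minimising the expected misclassification cost.

    probs : shape (n_classes,), model probability per class
    cost  : shape (n_classes, n_classes), cost[d, a] = severity of
            misassigning a class-d item to class a (zero diagonal)
    """
    expected_cost = probs @ cost  # expected cost of each possible assignment
    return int(np.argmin(expected_cost))

# Toy classes: 0 = "apples", 1 = "pears", 2 = "phones" (illustrative only).
probs = np.array([0.45, 0.40, 0.15])

# Hypothetical costs: confusing the two fruits is cheap, mixing fruit and
# phones is expensive, and accepting items into the small "apples" class
# shifts its index more than accepting them into a large class.
cost = np.array([[0.0,   1.0, 20.0],
                 [5.0,   0.0, 20.0],
                 [100.0, 10.0, 0.0]])

# argmax(probs) would pick "apples"; the expected-cost rule picks "pears",
# because a wrongly accepted item damages the apple index far more.
```

The probability-minimising and severity-minimising rules disagree exactly when the cost matrix is asymmetric enough, which is the situation the abstract targets.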

This "bottom-level" cost matrix optimises the outcomes of the individual classes (e.g. the price indices of apples and pears). However, it does not optimise the aggregate results of the classes (e.g. the price index of fruits). We illustrate how the aggregates of donor and acceptor, respectively, are affected by misclassification errors until the lowest common aggregate of donor and acceptor is reached. For common aggregates, the errors cancel fully or partially, depending on the outcome and the aggregation. In particular, we illustrate that removing uncertain items may benefit the statistics of individual classes but may harm aggregate statistics.

^{*}: Speaker
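The donor/acceptor cancellation can be made concrete with a toy average-price example (all labels and prices below are hypothetical; a simple mean stands in for the multilateral price index). One misclassified item moves the donor and acceptor class means in opposite directions, while an aggregate that pools all items is unchanged; an aggregate built as a fixed-weight average of class means would only partially cancel.

```python
# One apple (price 3.0) is misassigned to "pear".
true_labels = ["apple", "apple", "pear", "pear"]
pred_labels = ["apple", "pear",  "pear", "pear"]
prices      = [1.0,     3.0,     2.0,    2.0]

def class_mean(labels, cls):
    """Mean price of the items assigned to class `cls`."""
    vals = [p for lbl, p in zip(labels, prices) if lbl == cls]
    return sum(vals) / len(vals)

# Donor class "apple" loses the 3.0 item: its mean drops from 2.0 to 1.0.
# Acceptor class "pear" gains it: its mean rises from 2.0 to about 2.33.
# The common aggregate "fruit" pools all four items, so its mean (2.0)
# is the same under either labelling: the two errors cancel fully here.
fruit_mean = sum(prices) / len(prices)
```

Under a pooled aggregate the misassigned item contributes the same price either way, which is why the error vanishes at the lowest common aggregate of donor and acceptor.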