Calibration-then-Calculation: A Variance Reduced Metric Framework in Deep Click-Through Rate Prediction Models

30 Jan 2024 · Yewen Fan, Nian Si, Xiangchen Song, Kun Zhang

Deep learning has been adopted widely across many fields, yet comparatively little attention has been paid to evaluating the performance of deep learning pipelines. With large datasets and complex models, the training process is typically run only once and the result compared against previous benchmarks. This practice can lead to imprecise comparisons because neural network evaluation metrics vary across runs, a consequence of the inherent randomness in training. The traditional remedy, running the training process multiple times and averaging, is often infeasible due to computational constraints. In this paper, we introduce a novel metric framework, the Calibrated Loss Metric, designed to address this issue by reducing the variance present in its conventional counterpart. The new metric thereby improves the accuracy of detecting effective modeling improvements. Our approach is supported by theoretical justification and extensive experimental validation in the context of Deep Click-Through Rate Prediction Models.
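No implementation is linked on this page, so the sketch below is only a rough illustration of the calibration-then-calculation idea: fit a simple recalibration on a held-out split, then compute log loss on the test split using the recalibrated predictions. The helper name `calibrated_log_loss`, the choice of Platt scaling as the calibrator, and the calibration/test split are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

EPS = 1e-7


def _logit(p):
    """Map probabilities to logits, clipping away 0/1 for numerical stability."""
    p = np.clip(np.asarray(p, dtype=float), EPS, 1.0 - EPS)
    return np.log(p / (1.0 - p))


def calibrated_log_loss(p_cal, y_cal, p_test, y_test):
    """Illustrative calibration-then-calculation metric (hypothetical helper;
    the paper's exact calibrator is not specified on this page).

    Fit a one-dimensional logistic recalibration (Platt scaling) on a
    held-out calibration split, then score the recalibrated predictions
    on the test split with log loss.
    """
    calibrator = LogisticRegression()
    calibrator.fit(_logit(p_cal).reshape(-1, 1), y_cal)
    p_test_cal = calibrator.predict_proba(_logit(p_test).reshape(-1, 1))[:, 1]
    return log_loss(y_test, p_test_cal)


if __name__ == "__main__":
    # Synthetic CTR-like data: rare positives, and a model whose raw scores
    # are systematically miscalibrated (shifted in logit space).
    rng = np.random.default_rng(0)
    n = 20_000
    true_logit = rng.normal(-3.0, 1.0, size=n)            # ~5% base CTR
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))
    raw_p = 1.0 / (1.0 + np.exp(-(true_logit + 0.7)))     # miscalibrated scores
    half = n // 2
    print("raw log loss:       ", log_loss(y[half:], raw_p[half:]))
    print("calibrated log loss:", calibrated_log_loss(
        raw_p[:half], y[:half], raw_p[half:], y[half:]))
```

Because Platt scaling is monotone in the raw score, ranking metrics such as AUC are unchanged; the recalibration only removes run-to-run shifts in the predicted probability scale before the loss is computed, which is one plausible reading of where the variance reduction comes from.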
