Provably Robust Cost-Sensitive Learning via Randomized Smoothing

We study the problem of robust learning against adversarial perturbations in cost-sensitive scenarios, where the potential harm of each type of misclassification is encoded in a cost matrix. Existing approaches are either empirical, and thus cannot certify robustness, or suffer from inherent scalability issues. In this work, we investigate whether randomized smoothing, a scalable framework for robustness certification, can be leveraged to certify and train for cost-sensitive robustness. Building on the notion of a cost-sensitive certified radius, we first show how to adapt the standard certification algorithm of randomized smoothing to produce tight robustness certificates for any binary cost matrix, and then develop a robust training method that promotes certified cost-sensitive robustness while maintaining the model's overall accuracy. Through extensive experiments on image benchmarks, we demonstrate the superiority of our certification algorithm and training method across various cost-sensitive scenarios. Our implementation is available as open-source code at: https://github.com/TrustMLRG/CS-RS.
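To make the certification idea concrete, below is a minimal sketch of a Cohen et al.-style randomized-smoothing certificate restricted by a binary cost matrix. This is an illustrative assumption of how a cost-sensitive radius might be computed, not the paper's exact algorithm (see the repository above for that): the function name `certify_cost_sensitive`, the base-model interface, and the choice to bound the top class only against cost-incurring classes are all hypothetical.

```python
import numpy as np
from scipy.stats import norm, binomtest

def certify_cost_sensitive(model, x, cost_matrix, sigma=0.25,
                           n_samples=1000, alpha=0.001):
    """Hypothetical sketch: smoothed prediction plus a cost-sensitive
    certified radius under an isotropic Gaussian of scale sigma.

    model:       base classifier mapping an input array to a class index
    cost_matrix: binary matrix; cost_matrix[i, j] == 1 marks a harmful
                 misclassification of true class i as class j
    Returns (predicted_class, certified_radius); radius 0.0 means abstain.
    """
    num_classes = cost_matrix.shape[0]

    # Monte Carlo estimate: count base-model predictions under noise.
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n_samples):
        noisy = x + sigma * np.random.randn(*x.shape)
        counts[model(noisy)] += 1
    y_hat = int(counts.argmax())

    # Only classes whose confusion with y_hat incurs cost need certifying.
    harmful = [j for j in range(num_classes)
               if j != y_hat and cost_matrix[y_hat, j] == 1]

    # One-sided Clopper-Pearson lower bound on the probability of y_hat
    # relative to the harmful classes only (assumption: this mirrors the
    # cost-sensitive certified radius; the paper's details may differ).
    n_relevant = int(counts[y_hat] + counts[harmful].sum())
    p_lower = binomtest(int(counts[y_hat]), n_relevant,
                        alternative='greater').proportion_ci(
                            confidence_level=1 - alpha).low

    if p_lower <= 0.5:
        return y_hat, 0.0                      # abstain: no certificate
    return y_hat, sigma * norm.ppf(p_lower)    # cost-sensitive radius
```

Because the lower bound is taken only over the classes that actually incur cost, the certified radius can exceed the standard (cost-agnostic) radius whenever some confusions are harmless; a toy `model` returning a class index is enough to exercise the sketch.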
