Aleksandr (Sasha) Podkopaev

PhD Student at Carnegie Mellon University

Statistics & Data Science Department

Machine Learning Department

About me

Hey there! I’m a 4th-year PhD student in the joint program between the Statistics & Data Science and Machine Learning departments at Carnegie Mellon University. I’m fortunate to be advised by Professor Aaditya Ramdas. My research centers on building reliable ML systems, in particular distribution-free uncertainty quantification (conformal prediction, calibration) and adapting to the presence of distribution shifts. Before joining CMU, I obtained BSc and MSc degrees from the Moscow Institute of Physics and Technology and Skoltech.

Interests
  • Distribution-free uncertainty quantification (conformal prediction, calibration)
  • Distribution shifts
Education
  • PhD in Statistics and Machine Learning, in progress

    Carnegie Mellon University

  • MSc in Applied Mathematics, 2018

    Skolkovo Institute of Science and Technology, Moscow Institute of Physics and Technology

  • BSc in Applied Mathematics, 2016

    Moscow Institute of Physics and Technology

Publications

Tracking the risk of a deployed model and detecting harmful distribution shifts

When deployed in the real world, machine learning models inevitably encounter changes in the data distribution, and certain – but not all – distribution shifts could result in significant performance degradation. In practice, it may make sense to ignore benign shifts, under which the performance of a deployed model does not degrade substantially, making interventions by a human expert (or model retraining) unnecessary. While several works have developed tests for distribution shifts, these typically either use non-sequential methods, or detect arbitrary shifts (benign or harmful), or both. We argue that a sensible method for firing off a warning has to both (a) detect harmful shifts while ignoring benign ones, and (b) allow continuous monitoring of model performance without increasing the false alarm rate. In this work, we design simple sequential tools for testing if the difference between source (training) and target (test) distributions leads to a significant drop in a risk function of interest, like accuracy or calibration. Recent advances in constructing time-uniform confidence sequences allow efficient aggregation of statistical evidence accumulated during the tracking process. The designed framework is applicable in settings where (some) true labels are revealed after the prediction is performed, or when batches of labels become available in a delayed fashion. We demonstrate the efficacy of the proposed framework through an extensive empirical study on a collection of simulated and real datasets.
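To give a feel for the sequential monitoring setup described above, here is a minimal sketch, not the paper's construction: it tracks the running target risk of bounded losses with a conservative Hoeffding-style time-uniform confidence sequence (a union bound over time) and raises an alarm once the lower confidence bound exceeds the source risk plus a tolerance. The names harmful_shift_monitor and eps_tol, and the particular confidence-sequence radius, are assumptions made for this illustration; the framework in the paper relies on tighter time-uniform confidence sequences.

```python
import numpy as np

def harmful_shift_monitor(losses, source_risk, eps_tol=0.05, alpha=0.05):
    """Flag a harmful shift once the lower confidence bound on the target
    risk exceeds source_risk + eps_tol.

    Assumes losses are bounded in [0, 1]. Uses a conservative time-uniform
    confidence sequence built from Hoeffding's inequality with a union bound
    over time (level alpha / (t * (t + 1)) at step t), so the guarantee holds
    simultaneously over the whole monitoring horizon.
    """
    running_sum = 0.0
    for t, loss in enumerate(losses, start=1):
        running_sum += loss
        mean_t = running_sum / t
        # Hoeffding radius at level alpha / (t * (t + 1)); these levels sum to alpha over t.
        radius = np.sqrt(np.log(2 * t * (t + 1) / alpha) / (2 * t))
        if mean_t - radius > source_risk + eps_tol:
            return t  # first time a harmful shift is declared
    return None  # no harmful shift detected on this stream


# Toy usage: 0/1 losses with a benign period followed by a harmful shift.
rng = np.random.default_rng(0)
stream = np.concatenate([rng.binomial(1, 0.10, size=2000),
                         rng.binomial(1, 0.35, size=2000)])
print(harmful_shift_monitor(stream, source_risk=0.10))  # prints a detection time
```

Because the union bound over time is loose, detections here are delayed relative to tighter confidence sequences; the sketch only conveys the shape of the sequential test, i.e., continuous monitoring with a time-uniform guarantee on the false alarm rate.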

Contact

  • podkopaev AT cmu DOT edu