Aleksandr (Sasha) Podkopaev

PhD Student at Carnegie Mellon University

Statistics & Data Science Department

Machine Learning Department

About me

Hey there! I’m a 5th-year PhD student in the joint program between the Statistics & Data Science and Machine Learning departments at Carnegie Mellon University. I’m fortunate to be advised by Professor Aaditya Ramdas. We focus on topics related to building reliable ML systems: distribution-free uncertainty quantification (conformal prediction, calibration) and detecting and handling distribution drifts. Recently, I have also been looking into safe, anytime-valid inference (inference that remains valid after peeking at observed data). Before joining CMU, I obtained BSc and MSc degrees at the Moscow Institute of Physics and Technology and Skoltech.

(!!!) I am currently actively seeking industry research positions with a start date in Summer–Fall 2023.

Interests
  • Distribution-free uncertainty quantification (conformal prediction, calibration)
  • Distribution drifts
  • Sequential testing and safe, anytime-valid inference
Education
  • PhD in Statistics and Machine Learning, in progress

    Carnegie Mellon University

  • MSc in Applied Mathematics, 2018

    Skolkovo Institute of Science and Technology, Moscow Institute of Physics and Technology

  • BSc in Applied Mathematics, 2016

    Moscow Institute of Physics and Technology

Publications

Tracking the risk of a deployed model and detecting harmful distribution shifts

When deployed in the real world, machine learning models inevitably encounter changes in the data distribution, and certain – but not all – distribution shifts could result in significant performance degradation. In practice, it may make sense to ignore benign shifts, under which the performance of a deployed model does not degrade substantially, making interventions by a human expert (or model retraining) unnecessary. While several works have developed tests for distribution shifts, these typically either use non-sequential methods, or detect arbitrary shifts (benign or harmful), or both. We argue that a sensible method for firing off a warning has to both (a) detect harmful shifts while ignoring benign ones, and (b) allow continuous monitoring of model performance without increasing the false alarm rate. In this work, we design simple sequential tools for testing if the difference between source (training) and target (test) distributions leads to a significant drop in a risk function of interest, like accuracy or calibration. Recent advances in constructing time-uniform confidence sequences allow efficient aggregation of statistical evidence accumulated during the tracking process. The designed framework is applicable in settings where (some) true labels are revealed after the prediction is performed, or when batches of labels become available in a delayed fashion. We demonstrate the efficacy of the proposed framework through an extensive empirical study on a collection of simulated and real datasets.
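The core monitoring loop described above can be illustrated with a minimal sketch. This is not the paper's method: instead of its confidence-sequence constructions, it uses a simple time-uniform Hoeffding bound obtained by a union bound over time (allocating error probability alpha/(t(t+1)) at step t), and the function names, the bounded-loss assumption, and the tolerance parameter are all illustrative choices, not from the paper. An alarm fires only when the anytime-valid lower confidence bound on the target risk exceeds the tolerable level, so benign shifts are ignored and continuous monitoring does not inflate the false alarm rate.

```python
import math


def hoeffding_radius(t, alpha):
    """Time-uniform Hoeffding confidence radius for losses in [0, 1].

    A union bound over time with alpha_t = alpha / (t * (t + 1)) gives
    total error probability at most alpha, since the alpha_t sum to alpha.
    (Illustrative construction; the paper uses tighter confidence sequences.)
    """
    return math.sqrt(math.log(2 * t * (t + 1) / alpha) / (2 * t))


def detect_harmful_shift(losses, source_risk, tolerance=0.1, alpha=0.05):
    """Monitor a stream of observed losses and fire an alarm on harmful shift.

    losses      -- bounded losses in [0, 1], revealed as true labels arrive
    source_risk -- risk of the model on the source (training) distribution
    tolerance   -- degradation deemed benign; shifts below it are ignored

    Returns the 1-indexed alarm time, or None if no alarm ever fires.
    """
    total = 0.0
    for t, loss in enumerate(losses, start=1):
        total += loss
        # Anytime-valid lower bound on the running target risk.
        lower_bound = total / t - hoeffding_radius(t, alpha)
        if lower_bound > source_risk + tolerance:
            return t  # evidence of a harmful (not benign) shift
    return None
```

For example, a stream of consistently high losses triggers an alarm within a few dozen observations, while a stream matching the source risk never does, no matter how long monitoring continues.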

Contact

  • podkopaev AT cmu DOT edu