Aleksandr (Sasha) Podkopaev

PhD Student at Carnegie Mellon University

Statistics & Data Science Department

Machine Learning Department

About me

Hey there! I'm a 5th-year PhD student in the joint program between the Statistics & Data Science and Machine Learning departments at Carnegie Mellon University, where I'm fortunate to be advised by Professor Aaditya Ramdas. Our research focuses on building reliable ML systems: distribution-free uncertainty quantification (conformal prediction, calibration) and detecting and handling distribution shifts. Recently, I have also been working on safe, anytime-valid inference (inference that remains valid after peeking at the observed data). Before joining CMU, I obtained my BSc and MSc degrees from the Moscow Institute of Physics and Technology and Skoltech.

(!!!) I am actively seeking industry research positions with a start date in Summer–Fall 2023.

Interests
  • Sequential testing / safe, anytime-valid inference
  • Distribution-free uncertainty quantification (conformal prediction, calibration)
  • Distribution shifts
Education
  • PhD in Statistics and Machine Learning, in progress

    Carnegie Mellon University

  • MSc in Applied Mathematics, 2018

    Skolkovo Institute of Science and Technology, Moscow Institute of Physics and Technology

  • BSc in Applied Mathematics, 2016

    Moscow Institute of Physics and Technology

Publications

Sequential kernelized independence testing

Independence testing is a fundamental and classical statistical problem that has been extensively studied in the batch setting, where one fixes the sample size before collecting data. However, practitioners often prefer procedures that adapt to the complexity of the problem at hand instead of setting the sample size in advance. Ideally, such procedures should (a) allow stopping earlier on easy tasks (and later on harder tasks), hence making better use of available resources, and (b) continuously monitor the data and efficiently incorporate statistical evidence as new data are collected, while controlling the false alarm rate. It is well known that classical batch tests are not tailored for streaming data settings, since valid inference after data peeking requires correcting for multiple testing, but such corrections generally result in low power. In this paper, we design sequential kernelized independence tests (SKITs) that overcome such shortcomings based on the principle of testing by betting. We exemplify our broad framework using bets inspired by kernelized dependence measures such as the Hilbert-Schmidt independence criterion (HSIC) and the constrained-covariance criterion (COCO). Importantly, we also generalize the framework to non-i.i.d. time-varying settings, for which there exist no batch tests. We demonstrate the power of our approaches on both simulated and real data.
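To make the testing-by-betting recipe concrete, here is a minimal sketch in the spirit of SKIT. It assumes i.i.d. pairs, a Gaussian kernel, an HSIC-style witness function estimated from past data only, and a fixed betting fraction; the exact kernels, witness construction, and betting strategy in the paper differ, so treat this as an illustration rather than the paper's algorithm.

```python
# A minimal sketch of a betting-based sequential kernelized independence test.
# Kernel choice, witness function, and betting rule are simplifying assumptions.
import numpy as np

def rbf(a, b, bw=1.0):
    """Gaussian (RBF) kernel between two scalars or vectors."""
    return np.exp(-np.sum((np.atleast_1d(a) - np.atleast_1d(b)) ** 2) / (2 * bw ** 2))

def sequential_independence_test(xs, ys, alpha=0.05, lam=0.5):
    """Process (x_t, y_t) pairs two at a time; reject H0: X independent of Y when wealth >= 1/alpha."""
    wealth, hist_x, hist_y = 1.0, [], []

    def witness(x, y):
        # HSIC-style witness estimated from past data only (keeps the bet predictable):
        # empirical cross-covariance of the kernel features, evaluated at (x, y).
        kx = np.array([rbf(x, xi) for xi in hist_x])
        ly = np.array([rbf(y, yi) for yi in hist_y])
        return np.mean(kx * ly) - np.mean(kx) * np.mean(ly)

    for t in range(0, len(xs) - 1, 2):
        (x1, y1), (x2, y2) = (xs[t], ys[t]), (xs[t + 1], ys[t + 1])
        if len(hist_x) >= 2:
            # Under independence, swapping y1 and y2 leaves the joint distribution
            # unchanged, so this payoff has conditional mean zero given the past.
            payoff = np.tanh(witness(x1, y1) + witness(x2, y2)
                             - witness(x1, y2) - witness(x2, y1))
            wealth *= 1.0 + lam * payoff          # bet a fixed fraction of current wealth
            if wealth >= 1.0 / alpha:             # Ville's inequality => type-I error <= alpha
                return "reject H0 (dependence detected)", t + 2, wealth
        hist_x.extend([x1, x2]); hist_y.extend([y1, y2])
    return "fail to reject H0 so far", len(xs), wealth

# Toy usage: dependent data will typically trigger an early rejection,
# while independent data rarely does (false alarm rate at most alpha).
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
print(sequential_independence_test(x, x + 0.3 * rng.normal(size=2000)))   # dependent
print(sequential_independence_test(x, rng.normal(size=2000)))             # independent
```

Because the witness and the bet at each step depend only on previously seen data, the wealth process is a nonnegative martingale under the null, which is what makes continuous monitoring and optional stopping valid.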

Tracking the risk of a deployed model and detecting harmful distribution shifts

When deployed in the real world, machine learning models inevitably encounter changes in the data distribution, and certain – but not all – distribution shifts could result in significant performance degradation. In practice, it may make sense to ignore benign shifts, under which the performance of a deployed model does not degrade substantially, making interventions by a human expert (or model retraining) unnecessary. While several works have developed tests for distribution shifts, these typically either use non-sequential methods, or detect arbitrary shifts (benign or harmful), or both. We argue that a sensible method for firing off a warning has to both (a) detect harmful shifts while ignoring benign ones, and (b) allow continuous monitoring of model performance without increasing the false alarm rate. In this work, we design simple sequential tools for testing if the difference between source (training) and target (test) distributions leads to a significant drop in a risk function of interest, like accuracy or calibration. Recent advances in constructing time-uniform confidence sequences allow efficient aggregation of statistical evidence accumulated during the tracking process. The designed framework is applicable in settings where (some) true labels are revealed after the prediction is performed, or when batches of labels become available in a delayed fashion. We demonstrate the efficacy of the proposed framework through an extensive empirical study on a collection of simulated and real datasets.
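As a rough illustration of the tracking idea (not the paper's exact construction), the sketch below monitors a stream of 0/1 losses (e.g., misclassification indicators) observed after deployment, maintains a Hoeffding-style time-uniform lower confidence bound on the target risk, and raises a warning only when that bound exceeds the source risk by more than a user-chosen tolerance eps, i.e., only for shifts estimated to be harmful. The loss type, the specific confidence-sequence construction, and the tolerance are assumptions made for the example.

```python
# A minimal sketch of sequential risk tracking with a time-uniform confidence bound.
import numpy as np

def track_deployed_risk(losses, source_risk, eps=0.05, alpha=0.05):
    """Return an alarm time if the target risk provably exceeds source_risk + eps."""
    sum_lam_loss = sum_lam = sum_lam_sq = 0.0
    for t, loss in enumerate(losses, start=1):
        # Predictable tuning parameter: may only depend on data seen before time t.
        lam = min(1.0, np.sqrt(8 * np.log(1 / alpha) / (t * np.log(t + 1))))
        sum_lam_loss += lam * loss
        sum_lam += lam
        sum_lam_sq += lam ** 2
        # Time-uniform lower confidence bound on the target risk (Hoeffding-style
        # supermartingale plus Ville's inequality => valid at every stopping time).
        lower_bound = (sum_lam_loss - sum_lam_sq / 8 - np.log(1 / alpha)) / sum_lam
        if lower_bound > source_risk + eps:
            return t  # harmful shift detected: risk increased by more than eps
    return None       # no harmful shift detected; benign shifts are ignored by design

# Toy usage: the model had 10% error on source data; after deployment the error rate is 25%.
rng = np.random.default_rng(1)
target_losses = rng.binomial(1, 0.25, size=5000)
print(track_deployed_risk(target_losses, source_risk=0.10, eps=0.05))
```

Since the confidence bound holds uniformly over time, the monitor can be checked after every new label (or batch of delayed labels) without inflating the false alarm rate beyond alpha.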

Contact

  • podkopaev AT cmu DOT edu