CS Colloquium: Uri Stemmer (Harvard)
Algorithmic Stability for Adaptive Data Analysis
Adaptivity is an important feature of data analysis: the choice of questions to ask about a dataset often depends on previous interactions with the same dataset. However, statistical validity is typically studied in a nonadaptive model, where all questions are specified before the dataset is drawn. Recent work by Dwork et al. (STOC, 2015) initiated the formal study of this problem, and gave the first upper bounds on the achievable generalization error for adaptive data analysis.
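The danger described above can be seen in a short simulation (an illustrative sketch, not taken from the talk): on a dataset of pure noise, an analyst who first queries the sample to find attributes that look correlated with the labels, and then combines those attributes, obtains a statistic that fits the sample well but carries no signal about the population.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 1000  # 100 samples, 1000 candidate attributes

# Pure noise: attributes and labels are independent, so every true
# (population) correlation is exactly zero.
X = rng.standard_normal((n, d))
y = rng.choice([-1.0, 1.0], size=n)

# Adaptive step 1: query the sample for each attribute's empirical
# correlation with the label.
corrs = X.T @ y / n

# Adaptive step 2: keep only the attributes that looked "significant"
# on this sample, and combine them into a composite score.
selected = np.abs(corrs) > 2 / np.sqrt(n)
score = X[:, selected] @ np.sign(corrs[selected])

# In-sample, the composite score appears strongly predictive...
in_sample = np.corrcoef(score, y)[0, 1]

# ...but on fresh data drawn from the same (noise) distribution,
# the apparent signal collapses: the analysis overfit the sample.
X_new = rng.standard_normal((n, d))
y_new = rng.choice([-1.0, 1.0], size=n)
score_new = X_new[:, selected] @ np.sign(corrs[selected])
out_sample = np.corrcoef(score_new, y_new)[0, 1]
```

Because the second query depends on the answers to the first, the classical nonadaptive guarantees (which would hold for each query in isolation) no longer apply to the composite statistic.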
The results of Dwork et al. are based on a connection with algorithmic stability in the form of differential privacy. We extend their work by giving a quantitatively optimal, more general, and simpler proof of their main theorem that stable algorithms of the kind guaranteed by differential privacy imply low generalization error. We also show that weaker stability guarantees such as bounded KL divergence and total variation distance lead to correspondingly weaker generalization guarantees.
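One informal form of the transfer principle underlying this line of work (stated here only as a sketch, not as the speakers' exact theorem) is the following: if a mechanism $M$ is $\varepsilon$-differentially private and, on a sample $S \sim \mathcal{P}^n$, outputs a predicate $q$, then the empirical average of $q$ on $S$ cannot exceed its population average by much in expectation:

\[
\mathbb{E}_{S,\,M}\!\left[\frac{1}{n}\sum_{x \in S} q(x)\right]
\;\le\;
e^{\varepsilon}\;\mathbb{E}_{S,\,M}\!\left[\,\mathbb{E}_{x \sim \mathcal{P}}[q(x)]\,\right].
\]

Intuitively, a differentially private analyst cannot depend too strongly on any individual sample point, and therefore cannot, in expectation, select a query that overfits the sample by more than a small multiplicative factor.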
Joint work with Raef Bassily, Kobbi Nissim, Adam Smith, Thomas Steinke, and Jonathan Ullman.
Bio: I am a post-doctoral fellow at the Center for Research on Computation and Society (CRCS), Harvard University. I completed my Ph.D. in computer science at Ben-Gurion University, where I was lucky to have Prof. Amos Beimel and Prof. Kobbi Nissim as my advisors. My research is focused on connections between learning theory and differential privacy, a line of work aimed at enabling rich analyses on sensitive individual data, while providing strong privacy guarantees for the individuals.
Thursday, October 12 at 11:00am