Abstract
Machine unlearning addresses the problem of efficiently removing the influence of individual data points from trained models, enabling users to exercise their right to be forgotten while retaining the generalization capabilities of the model. Although significant progress has been made in low-dimensional regimes, where the number of parameters $p$ is much smaller than the number of samples $n$, serious theoretical challenges remain in high dimensions, where $p$ and $n$ are both large while the ratio $p/n$ is fixed. In this work, we analyze the performance of a widely used unlearning algorithm based on a few steps of the Newton method in the high-dimensional setting described above. We introduce $(\epsilon, \delta)$-Gaussian certifiability, a notion particularly well-suited to high-dimensional regimes. Our analysis shows that a single Newton step, followed by properly calibrated Gaussian noise, is sufficient to achieve both privacy and accuracy in this setting. This result stands in sharp contrast to the only prior work that analyzes machine unlearning in high dimensions—\cite{zou2025certified}—which operates under the stricter notion of $(\epsilon, \delta)$-certifiability. That work concludes that a single Newton step is insufficient, even for removing a single data point, and that at least two steps are required to ensure both privacy and accuracy. Our result suggests that such a conclusion stems from specific features of the $(\epsilon, \delta)$-certifiability definition, rather than being an inherent limitation of high-dimensional settings.
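The procedure analyzed in the abstract—one Newton step on the retained data, followed by calibrated Gaussian noise—can be illustrated with a minimal sketch. The sketch below assumes a ridge-regression objective for concreteness (the paper's actual model class and noise calibration may differ), and the noise scale `sigma` is a placeholder rather than the calibrated value from the analysis.

```python
import numpy as np

def newton_unlearn(theta, X, y, remove_idx, lam, sigma, seed=0):
    """One Newton step toward the model retrained without `remove_idx`,
    followed by Gaussian noise of scale `sigma` (illustrative sketch)."""
    keep = np.ones(len(y), dtype=bool)
    keep[remove_idx] = False
    Xr, yr = X[keep], y[keep]
    n_r = len(yr)
    # Gradient and Hessian of the ridge objective on the retained data
    grad = Xr.T @ (Xr @ theta - yr) / n_r + lam * theta
    H = Xr.T @ Xr / n_r + lam * np.eye(X.shape[1])
    theta_new = theta - np.linalg.solve(H, grad)
    # Add Gaussian noise; `sigma` stands in for the calibrated scale
    noise = sigma * np.random.default_rng(seed).normal(size=theta.shape)
    return theta_new + noise
```

Because the ridge objective is quadratic, a single Newton step from any starting point lands exactly on the retained-data minimizer, so with `sigma = 0` the sketch coincides with exact retraining; for non-quadratic losses the step is only an approximation, which is where the noise and the certifiability analysis enter.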
Bio
Aaradhya Pandey is a fourth-year doctoral candidate in the Department of Operations Research and Financial Engineering at Princeton University, supervised by Professor Sanjeev Kulkarni. He received his B.Sc. in Mathematics (Gold Medal, 2021) from the Indian Institute of Science, where he focused on probability theory. His research interests encompass high-dimensional statistics, machine learning, and spin-glass theory.