Classification Under Local Differential Privacy with Model Reversal and Model Averaging

Caihong Qin, Yang Bai.

Year: 2026, Volume: 27, Issue: 5, Pages: 1–44


Abstract

Local differential privacy has become a central topic in data privacy research, offering strong privacy guarantees by perturbing user data at the source and removing the need for a trusted curator. However, the noise introduced by local differential privacy often significantly reduces data utility. To address this issue, we reinterpret private learning under local differential privacy as a transfer learning problem, in which the noisy data serve as the source domain and the unobserved clean data as the target. We propose novel techniques specifically designed for local differential privacy that improve classification performance without compromising privacy: (1) an evaluation mechanism that estimates dataset utility from noised binary feedback; (2) model reversal, which salvages underperforming classifiers by inverting their decision boundaries; and (3) model averaging, which assigns weights to multiple reversed classifiers based on their estimated utility. We provide theoretical excess risk bounds under local differential privacy and demonstrate how our methods reduce this risk. Empirical results on both simulated and real-world datasets show substantial improvements in classification accuracy.
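To make the three ingredients concrete, the following minimal Python sketch illustrates one plausible instantiation under simplifying assumptions: a classifier's utility is estimated from binary correct/incorrect feedback perturbed by randomized response (a standard local differential privacy primitive) and then debiased; any classifier with estimated accuracy below 1/2 is reversed; and the reversed classifiers cast a weighted vote with weights increasing in estimated utility. All function names, the debiasing step, and the specific weighting rule are illustrative assumptions, not the paper's exact mechanism.

```python
# Illustrative sketch only: the utility estimator, reversal rule, and
# weighting scheme below are assumptions for exposition, not the
# authors' exact procedure.
import numpy as np

def randomized_response(bits, epsilon, rng):
    """Perturb binary feedback with randomized response, a basic LDP mechanism."""
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    flip = rng.random(bits.shape) > p_keep
    return np.where(flip, 1 - bits, bits)

def estimate_utility(predict_fn, X_val, y_val, epsilon, rng):
    """Estimate accuracy from noised correct/incorrect feedback, then debias."""
    correct = (predict_fn(X_val) == y_val).astype(int)
    noised = randomized_response(correct, epsilon, rng)
    p = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    # E[noised.mean()] = p * acc + (1 - p) * (1 - acc); invert the affine map.
    acc_hat = (noised.mean() - (1.0 - p)) / (2.0 * p - 1.0)
    return float(np.clip(acc_hat, 0.0, 1.0))

def reverse_if_needed(predict_fn, acc_hat):
    """Model reversal: invert the decision boundary of a worse-than-random
    classifier, turning estimated accuracy a < 1/2 into 1 - a."""
    if acc_hat < 0.5:
        return (lambda X: 1 - predict_fn(X)), 1.0 - acc_hat
    return predict_fn, acc_hat

def averaged_predict(predict_fns, utils, X):
    """Model averaging: weighted vote, weights increasing in post-reversal
    estimated utility (this particular weight choice is an assumption)."""
    w = np.maximum(np.asarray(utils) - 0.5, 1e-6)
    votes = np.stack([f(X) for f in predict_fns])   # shape (k, n)
    return ((w[:, None] * votes).sum(0) / w.sum() >= 0.5).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(4000, 1))
    y = (X[:, 0] > 0).astype(int)

    # Three crude decision stumps; the second is far worse than random,
    # so reversal should salvage it.
    stumps = [lambda X: (X[:, 0] > -0.2).astype(int),
              lambda X: (X[:, 0] < 0.1).astype(int),
              lambda X: (X[:, 0] > 0.5).astype(int)]

    fns, utils = [], []
    for f in stumps:
        acc = estimate_utility(f, X, y, epsilon=1.0, rng=rng)
        g, acc = reverse_if_needed(f, acc)
        fns.append(g)
        utils.append(acc)

    # For brevity the same data is reused for estimation and evaluation;
    # a held-out split would be used in practice.
    pred = averaged_predict(fns, utils, X)
    print("ensemble accuracy:", (pred == y).mean())
```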
