2026-04-03
Abstract
Differential privacy (DP) has become the gold standard for analyzing the extent to which machine learning algorithms are privacy-preserving. However, it remains unclear what levels of privacy, possibly measured by values of \(\epsilon\) and \(\delta\) in the widely used \((\epsilon,\delta)\)-DP framework, are appropriate in a given application. One approach is to explain the main principles of DP to human domain experts, but this often does not enable them to accurately answer the question of which parameter values are appropriate. In this paper, we take another approach: we observe common human behavior and attempt to measure how privacy-preserving it is. Doing so can give us insight into what levels of privacy protection humans find acceptable, and at the same time it raises a number of interesting questions related to assessing the statistical privacy of black-box processes.
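As background (the standard definition, not specific to this work): a randomized mechanism \(M\) satisfies \((\epsilon,\delta)\)-DP if, for every pair of neighboring datasets \(D\) and \(D'\) and every measurable set of outputs \(S\),

\[
\Pr[M(D) \in S] \;\le\; e^{\epsilon}\,\Pr[M(D') \in S] + \delta,
\]

where smaller values of \(\epsilon\) and \(\delta\) correspond to stronger privacy guarantees.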
Preprint (soon)