Differential Privacy: Website for the differential privacy research community
https://differentialprivacy.org
A Better Privacy Analysis of the Exponential Mechanism

<p>A basic and frequent task in data analysis is <em>selection</em> – given a set of options \(\mathcal{Y}\), output the (approximately) best one, where “best” is defined by some loss function \(\ell : \mathcal{Y} \times \mathcal{X}^n \to \mathbb{R}\) and a dataset \(x \in \mathcal{X}^n\). That is, we want to output some \(y \in \mathcal{Y}\) that approximately minimizes \(\ell(y,x)\). Naturally, we are interested in <em>private selection</em> – i.e., the output should be differentially private in terms of the dataset \(x\).
This post discusses algorithms for private selection – in particular, we give an improved privacy analysis of the popular exponential mechanism.</p>
<h2 id="the-exponential-mechanism">The Exponential Mechanism</h2>
<p>The most well-known algorithm for private selection is the <a href="https://en.wikipedia.org/wiki/Exponential_mechanism_(differential_privacy)"><em>exponential mechanism</em></a> <a href="https://doi.org/10.1109/FOCS.2007.66" title="Frank McSherry, Kunal Talwar. Mechanism Design via Differential Privacy. FOCS 2007."><strong>[MT07]</strong></a>. The exponential mechanism \(M : \mathcal{X}^n \to \mathcal{Y} \) is a randomized algorithm given by \[\forall x \in \mathcal{X}^n ~ \forall y \in \mathcal{Y} ~~~~~ \mathbb{P}[M(x) = y] = \frac{\exp(-\frac{\varepsilon}{2\Delta} \ell(y,x))}{\sum_{y’ \in \mathcal{Y}} \exp(-\frac{\varepsilon}{2\Delta} \ell(y’,x)) }, \tag{1}\] where \(\Delta\) is the sensitivity of the loss function \(\ell\) given by \[\Delta = \sup_{x,x’ \in \mathcal{X}^n : d(x,x’) \le 1} \max_{y\in\mathcal{Y}} |\ell(y,x) - \ell(y,x’)|,\tag{2}\] where the supremum is taken over all datasets \(x\) and \(x’\) differing on the data of a single individual (which we denote by \(d(x,x’)\le 1\)).</p>
<p>In terms of utility, we can easily show that <a href="https://arxiv.org/abs/1511.02513" title="Raef Bassily, Kobbi Nissim, Adam Smith, Thomas Steinke, Uri Stemmer, Jonathan Ullman. Algorithmic Stability for Adaptive Data Analysis. STOC 2016."><strong>[BNSSSU16]</strong></a> \[\mathbb{E}[\ell(M(x),x)] \le \min_{y \in \mathcal{Y}} \ell(y,x) + \frac{2\Delta}{\varepsilon} \log |\mathcal{Y}|\] for all \(x \in \mathcal{X}^n\) (and we can also give high probability bounds).</p>
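<p>To make the definition concrete, here is a minimal, stdlib-only Python sketch of sampling from the distribution in Equation 1 over a finite option set. The function name and the toy losses are our own illustration; a production implementation would also need to guard against floating-point side channels:</p>

```python
import math
import random

def exponential_mechanism(losses, epsilon, sensitivity, rng=None):
    """Sample y with probability proportional to exp(-epsilon/(2*sensitivity) * loss).

    `losses` maps each candidate y to its loss ell(y, x) on the fixed dataset x.
    """
    rng = rng or random.Random()
    keys = list(losses)
    # Shift by the minimum loss for numerical stability; the shift cancels in the
    # ratio of Equation 1 and so does not change the output distribution.
    m = min(losses.values())
    weights = [math.exp(-epsilon / (2 * sensitivity) * (losses[y] - m)) for y in keys]
    r = rng.random() * sum(weights)
    for y, w in zip(keys, weights):
        r -= w
        if r <= 0:
            return y
    return keys[-1]

# Toy example: three options, sensitivity 1, epsilon = 1.
losses = {"a": 0.0, "b": 1.0, "c": 5.0}
y = exponential_mechanism(losses, epsilon=1.0, sensitivity=1.0, rng=random.Random(0))
```

Lower-loss options are exponentially more likely to be selected, which is exactly the utility guarantee above.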
<p>It is easy to show that the exponential mechanism satisfies \(\varepsilon\)-differential privacy.
But there is more to this story! We’re going to look at a more refined privacy analysis.</p>
<h2 id="bounded-range">Bounded Range</h2>
<p>The privacy guarantee of the exponential mechanism is more precisely characterized by <em>bounded range</em>. This was observed and defined by David Durfee and Ryan Rogers <a href="https://arxiv.org/abs/1905.04273" title="David Durfee, Ryan Rogers. Practical Differentially Private Top-k Selection with Pay-what-you-get Composition. NeurIPS 2019"><strong>[DR19]</strong></a> and further analyzed later <a href="https://arxiv.org/abs/1909.13830" title="Jinshuo Dong, David Durfee, Ryan Rogers. Optimal Differential Privacy Composition for Exponential Mechanisms. ICML 2020."><strong>[DDR20]</strong></a>.</p>
<blockquote>
<p><strong>Definition 1 (Bounded Range).</strong><sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup>
A randomized algorithm \(M : \mathcal{X}^n \to \mathcal{Y}\) satisfies \(\eta\)-bounded range if, for all pairs of inputs \(x, x’ \in \mathcal{X}^n\) differing only on the data of a single individual, there exists some \(t \in \mathbb{R}\) such that \[\forall y \in \mathcal{Y} ~~~~~ \log\left(\frac{\mathbb{P}[M(x)=y]}{\mathbb{P}[M(x’)=y]}\right) \in [t, t+\eta].\] Here \(t\) may depend on the pair of input datasets \(x,x’\), but not on the output \(y\).</p>
</blockquote>
<p>To interpret this definition, we <a href="/flavoursofdelta/">recall the definition of the privacy loss random variable</a>: Define \(f : \mathcal{Y} \to \mathbb{R}\) by \[f(y) = \log\left(\frac{\mathbb{P}[M(x)=y]}{\mathbb{P}[M(x’)=y]}\right).\] Then the privacy loss random variable \(Z \gets \mathsf{PrivLoss}(M(x)\|M(x’))\) is given by \(Z = f(M(x))\).</p>
<p>Pure \(\varepsilon\)-differential privacy is equivalent to demanding that the privacy loss is bounded by \(\varepsilon\) – i.e., \(\mathbb{P}[|Z|\le\varepsilon]=1\). Approximate \((\varepsilon,\delta)\)-differential privacy is, roughly, equivalent to demanding that \(\mathbb{P}[Z\le\varepsilon]\ge1-\delta\).<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup></p>
<p>Now \(\eta\)-bounded range is simply demanding that the privacy loss \(Z\) is supported on some interval of length \(\eta\). This interval \([t,t+\eta]\) may depend on the pair \(x,x’\).</p>
<p>Bounded range and pure differential privacy are equivalent up to a factor of 2 in the parameters:</p>
<blockquote>
<p><strong>Lemma 2 (Bounded Range versus Pure Differential Privacy).</strong></p>
<ul>
<li>\(\varepsilon\)-differential privacy implies \(\eta\)-bounded range with \(\eta \le 2\varepsilon\).</li>
<li>\(\eta\)-bounded range implies \(\varepsilon\)-differential privacy with \(\varepsilon \le \eta\).</li>
</ul>
</blockquote>
<p><em>Proof.</em> The first part of the equivalence follows from the fact that pure \(\varepsilon\)-differential privacy implies the privacy loss is supported on the interval \([-\varepsilon,\varepsilon]\). Thus, if we set \(t=-\varepsilon\) and \(\eta=2\varepsilon\), then \([t,t+\eta] = [-\varepsilon,\varepsilon]\).
The second part follows from the fact that the support \([t,t+\eta]\) of the privacy loss must straddle \(0\). Indeed, if \(f(y)>0\) for all \(y\) (or \(f(y)<0\) for all \(y\)), then \(\mathbb{P}[M(x)=y]>\mathbb{P}[M(x’)=y]\) for all \(y\) (respectively \(<\)), contradicting the fact that \(\sum_{y \in \mathcal{Y}} \mathbb{P}[M(x)=y] = 1\) and \(\sum_{y \in \mathcal{Y}} \mathbb{P}[M(x’)=y] = 1\). Thus the privacy loss cannot be always positive nor always negative, so \(0 \in [t,t+\eta]\) and, hence, \([t,t+\eta] \subseteq [-\eta,\eta]\). ∎</p>
<p>OK, back to the exponential mechanism:</p>
<blockquote>
<p><strong>Lemma 3 (The Exponential Mechanism is Bounded Range).</strong>
The exponential mechanism (given in Equation 1 above) satisfies \(\varepsilon\)-bounded range.<sup id="fnref:3" role="doc-noteref"><a href="#fn:3" class="footnote" rel="footnote">3</a></sup></p>
</blockquote>
<p><em>Proof.</em>
We have \[e^{f(y)} = \frac{\mathbb{P}[M(x)=y]}{\mathbb{P}[M(x’)=y]} = \frac{\exp(-\frac{\varepsilon}{2\Delta}\ell(y,x))}{\exp(-\frac{\varepsilon}{2\Delta}\ell(y,x’))} \cdot \frac{\sum_{y’} \exp(-\frac{\varepsilon}{2\Delta} \ell(y’,x’))}{\sum_{y’} \exp(-\frac{\varepsilon}{2\Delta} \ell(y’,x))}.\]
Setting \(t = \log\left(\frac{\sum_{y’} \exp(-\frac{\varepsilon}{2\Delta} \ell(y’,x’))}{\sum_{y’} \exp(-\frac{\varepsilon}{2\Delta} \ell(y’,x))}\right) - \frac{\varepsilon}{2}\), we have \[ f(y) = \frac{\varepsilon}{2\Delta} (\ell(y,x’)-\ell(y,x)+\Delta) + t.\]
By the definition of sensitivity (given in Equation 2), we have \( 0 \le \ell(y,x’)-\ell(y,x)+\Delta \le 2\Delta\), whence \(t \le f(y) \le t + \varepsilon\). ∎</p>
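<p>Lemma 3 is easy to check numerically. The sketch below (the toy loss vectors and \(\Delta = 1\) are our own choices) computes \(f(y)\) for a pair of neighboring loss vectors and confirms that the values span an interval of width at most \(\varepsilon\):</p>

```python
import math

def em_log_probs(losses, epsilon, sensitivity):
    """Log-probabilities assigned by the exponential mechanism to each option."""
    scores = [-epsilon / (2 * sensitivity) * l for l in losses]
    log_z = math.log(sum(math.exp(s) for s in scores))
    return [s - log_z for s in scores]

eps, delta = 1.0, 1.0
loss_x  = [0.0, 2.0, 3.0]   # losses on dataset x
loss_x2 = [1.0, 1.5, 2.5]   # losses on a neighbor x'; each entry moves by <= delta

# f(y) = log(P[M(x)=y] / P[M(x')=y]) for each y.
f = [p - q for p, q in zip(em_log_probs(loss_x, eps, delta),
                           em_log_probs(loss_x2, eps, delta))]
width = max(f) - min(f)
assert width <= eps  # the privacy losses fit in an interval of length epsilon
```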
<p>Bounded range is not really a useful privacy definition on its own. Thus we’re going to relate it to a relaxed version of differential privacy next.</p>
<h2 id="concentrated-differential-privacy">Concentrated Differential Privacy</h2>
<p>Concentrated differential privacy <a href="https://arxiv.org/abs/1605.02065" title="Mark Bun, Thomas Steinke. Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds. TCC 2016."><strong>[BS16]</strong></a> and its variants <a href="https://arxiv.org/abs/1603.01887" title="Cynthia Dwork, Guy N. Rothblum. Concentrated Differential Privacy. 2016."><strong>[DR16]</strong></a> <a href="https://arxiv.org/abs/1702.07476" title="Ilya Mironov. Rényi Differential Privacy. CCS 2017."><strong>[M17]</strong></a> are relaxations of pure differential privacy with many nice properties. In particular, it composes very cleanly.</p>
<blockquote>
<p><strong>Definition 4 (Concentrated Differential Privacy).</strong>
A randomized algorithm \(M : \mathcal{X}^n \to \mathcal{Y}\) satisfies \(\rho\)-concentrated differential privacy if, for all pairs of inputs \(x, x’ \in \mathcal{X}^n\) differing only on the data of a single individual,
\[\forall \lambda > 0 ~~~~~ \mathbb{E}[\exp( \lambda Z)] \le \exp(\lambda(\lambda+1)\rho),\tag{3}\]
where \(Z \gets \mathsf{PrivLoss}(M(x)\|M(x’))\) is the privacy loss random variable.<sup id="fnref:4" role="doc-noteref"><a href="#fn:4" class="footnote" rel="footnote">4</a></sup></p>
</blockquote>
<p>Intuitively, concentrated differential privacy requires that the privacy loss is subgaussian. Specifically, the bound on the moment generating function of \(\rho\)-concentrated differential privacy is tight if the privacy loss \(Z\) follows the distribution \(\mathcal{N}(\rho,2\rho)\). Indeed, the privacy loss random variable of the Gaussian mechanism has such a distribution.<sup id="fnref:5" role="doc-noteref"><a href="#fn:5" class="footnote" rel="footnote">5</a></sup></p>
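<p>To see why this is tight, recall that \(X \sim \mathcal{N}(\mu,\sigma^2)\) has moment generating function \(\mathbb{E}[\exp(\lambda X)] = \exp(\mu\lambda + \frac12 \sigma^2 \lambda^2)\). Plugging in \(Z \sim \mathcal{N}(\rho, 2\rho)\) gives \[\mathbb{E}[\exp(\lambda Z)] = \exp\left(\rho\lambda + \tfrac12 \cdot 2\rho \cdot \lambda^2\right) = \exp(\lambda(\lambda+1)\rho),\] which matches Equation 3 with equality.</p>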
<p>OK, back to the exponential mechanism:
We know that \(\varepsilon\)-differential privacy implies \(\frac12 \varepsilon^2\)-concentrated differential privacy <a href="https://arxiv.org/abs/1605.02065" title="Mark Bun, Thomas Steinke. Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds. TCC 2016."><strong>[BS16]</strong></a>.
This, of course, applies to the exponential mechanism. A cool fact – that we want to draw more attention to – is that we can do better!
Specifically, \(\eta\)-bounded range implies \(\frac18 \eta^2\)-concentrated differential privacy <a href="https://arxiv.org/abs/2004.07223" title="Mark Cesar, Ryan Rogers. Bounding, Concentrating, and Truncating: Unifying Privacy Loss Composition for Data Analytics. ALT 2021."><strong>[CR21]</strong></a>.
What follows is a proof of this fact following that of Mark Cesar and Ryan Rogers, but with some simplification.</p>
<blockquote>
<p><strong>Theorem 5 (Bounded Range implies Concentrated Differential Privacy).</strong>
If \(M\) is \(\eta\)-bounded range, then it is \(\frac18\eta^2\)-concentrated differentially private.</p>
</blockquote>
<p><em>Proof.</em>
Fix datasets \(x,x’ \in \mathcal{X}^n\) differing on a single individual’s data.
Let \(Z \gets \mathsf{PrivLoss}(M(x)\|M(x’))\) be the privacy loss random variable of the mechanism \(M\) on this pair of datasets.
By the definition of bounded range (Definition 1), there exists some \(t \in \mathbb{R}\) such that \(Z \in [t, t+\eta]\) with probability 1.
Now we employ <a href="https://en.wikipedia.org/wiki/Hoeffding%27s_lemma">Hoeffding’s Lemma</a> <a href="https://doi.org/10.1080%2F01621459.1963.10500830" title="Wassily Hoeffding. Probability inequalities for sums of bounded random variables. JASA 1963."><strong>[H63]</strong></a>:</p>
<blockquote>
<p><strong>Lemma 6 (Hoeffding’s Lemma).</strong>
Let \(X\) be a random variable supported on the interval \([a,b]\). Then, for all \(\lambda \in \mathbb{R}\), we have \[\mathbb{E}[\exp(\lambda X)] \le \exp \left( \mathbb{E}[X] \cdot \lambda + \frac{(b-a)^2}{8} \cdot \lambda^2 \right).\]</p>
</blockquote>
<p>Applying the lemma to the privacy loss gives \[\forall \lambda \in \mathbb{R} ~~~~~ \mathbb{E}[\exp(\lambda Z)] \le \exp \left( \mathbb{E}[Z] \cdot \lambda + \frac{\eta^2}{8} \cdot \lambda^2 \right).\]
The only remaining thing we need to show is that \(\mathbb{E}[Z] \le \frac18 \eta^2\).<sup id="fnref:6" role="doc-noteref"><a href="#fn:6" class="footnote" rel="footnote">6</a></sup></p>
<p>If we set \(\lambda = -1 \), then we get \( \mathbb{E}[\exp( - Z)] \le \exp \left( -\mathbb{E}[Z] + \frac{\eta^2}{8} \right)\), which rearranges to \(\mathbb{E}[Z] \le \frac18 \eta^2 - \log \mathbb{E}[\exp( - Z)]\).
Now we have \[ \mathbb{E}[\exp( - Z)] \!=\! \sum_y \mathbb{P}[M(x)\!=\!y] \exp(-f(y)) \!=\! \sum_y \mathbb{P}[M(x)\!=\!y] \!\cdot\! \frac{\mathbb{P}[M(x’)\!=\!y]}{\mathbb{P}[M(x)\!=\!y]} \!=\! 1.\]
∎</p>
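<p>Both facts used in the proof can be verified numerically for the exponential mechanism. The sketch below (toy neighboring loss vectors with sensitivity 1, our own choice) checks that \(\mathbb{E}[\exp(-Z)] = 1\) and that \(\mathbb{E}[Z] \le \frac18 \varepsilon^2\):</p>

```python
import math

def em_probs(losses, epsilon, sensitivity):
    """Output distribution of the exponential mechanism for a loss vector."""
    scores = [math.exp(-epsilon / (2 * sensitivity) * l) for l in losses]
    z = sum(scores)
    return [s / z for s in scores]

eps = 1.0
p = em_probs([0.0, 2.0, 3.0], eps, 1.0)   # distribution of M(x)
q = em_probs([1.0, 1.5, 2.5], eps, 1.0)   # distribution of M(x'), a neighbor

# Privacy loss values f(y), the identity E[exp(-Z)] = 1, and the KL bound.
f = [math.log(pi / qi) for pi, qi in zip(p, q)]
e_exp_neg_z = sum(pi * math.exp(-fi) for pi, fi in zip(p, f))
mean_z = sum(pi * fi for pi, fi in zip(p, f))   # E[Z] = KL(M(x) || M(x'))
assert abs(e_exp_neg_z - 1.0) < 1e-9
assert mean_z <= eps ** 2 / 8
```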
<p>This brings us to the TL;DR of this post:</p>
<blockquote>
<p><strong>Corollary 7.</strong> The exponential mechanism (given by Equation 1) is \(\frac18 \varepsilon^2\)-concentrated differentially private.</p>
</blockquote>
<p>This is great news. The standard analysis only gives \(\frac12 \varepsilon^2\)-concentrated differential privacy. Constants matter when applying differential privacy, and we save a factor of 4 in the concentrated differential privacy analysis of the exponential mechanism for free with this improved analysis.</p>
<p>Combining Lemma 2 with Theorem 5 also gives a simpler proof of the conversion from pure differential privacy to concentrated differential privacy <a href="https://arxiv.org/abs/1605.02065" title="Mark Bun, Thomas Steinke. Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds. TCC 2016."><strong>[BS16]</strong></a>:</p>
<blockquote>
<p><strong>Corollary 8.</strong> \(\varepsilon\)-differential privacy implies \(\frac12 \varepsilon^2\)-concentrated differential privacy.</p>
</blockquote>
<h2 id="beyond-the-exponential-mechanism">Beyond the Exponential Mechanism</h2>
<p>The exponential mechanism is not the only algorithm for private selection. A closely-related algorithm is <em>report noisy max/min</em>:<sup id="fnref:7" role="doc-noteref"><a href="#fn:7" class="footnote" rel="footnote">7</a></sup> Draw independent noise \(\xi_y\) from some distribution for each \(y \in \mathcal{Y}\) then output \[M(x) = \underset{y \in \mathcal{Y}}{\mathrm{argmin}} ~ \ell(y,x) - \xi_y.\]</p>
<p>If the noise distribution is an appropriate <a href="https://en.wikipedia.org/wiki/Gumbel_distribution">Gumbel distribution</a>, then report noisy max is exactly the exponential mechanism. (This equivalence is known as the “Gumbel max trick.”)</p>
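<p>The Gumbel max trick is easy to demonstrate empirically. In the sketch below (the toy losses are our own; the noise scale \(2\Delta/\varepsilon\) matches Equation 1), report noisy min with Gumbel noise reproduces the output frequencies of the exponential mechanism:</p>

```python
import math
import random

def gumbel_noisy_min(losses, epsilon, sensitivity, rng):
    """Report noisy min with Gumbel noise of scale 2*sensitivity/epsilon."""
    scale = 2 * sensitivity / epsilon
    best, best_val = None, float("inf")
    for y, l in losses.items():
        gumbel = -math.log(-math.log(rng.random()))  # standard Gumbel sample
        noisy = l - scale * gumbel
        if noisy < best_val:
            best, best_val = y, noisy
    return best

losses = {"a": 0.0, "b": 1.0, "c": 3.0}
eps, n = 1.0, 20000
rng = random.Random(0)
counts = {y: 0 for y in losses}
for _ in range(n):
    counts[gumbel_noisy_min(losses, eps, 1.0, rng)] += 1

# Compare empirical frequencies to the exponential mechanism's probabilities.
weights = {y: math.exp(-eps / 2 * l) for y, l in losses.items()}
z = sum(weights.values())
for y in losses:
    assert abs(counts[y] / n - weights[y] / z) < 0.02
```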
<p>We can also use the Laplace distribution or the exponential distribution. Report noisy max with the exponential distribution is equivalent to the <em>permute and flip</em> algorithm <a href="https://arxiv.org/abs/2010.12603" title="Ryan McKenna, Daniel Sheldon. Permute-and-Flip: A new mechanism for differentially private selection. NeurIPS 2020."><strong>[MS20]</strong></a> <a href="https://arxiv.org/abs/2105.07260" title="Zeyu Ding, Daniel Kifer, Sayed M. Saghaian N. E., Thomas Steinke, Yuxin Wang, Yingtai Xiao, Danfeng Zhang. The Permute-and-Flip Mechanism is Identical to Report-Noisy-Max with Exponential Noise. 2021."><strong>[DKSSWXZ21]</strong></a>. However, these algorithms don’t enjoy the same improved bounded range and concentrated differential privacy guarantees as the exponential mechanism.</p>
<p>There are also other variants of the selection problem. For example, in some cases we can assume that only a few options have low loss and the rest of the options have high loss – i.e., there is a gap between the minimum loss and the second-lowest loss (or, more generally, the \(k\)-th lowest loss). In this case there are algorithms that attain better accuracy than the exponential mechanism under relaxed privacy definitions <a href="https://arxiv.org/abs/1409.2177" title="Kamalika Chaudhuri, Daniel Hsu, Shuang Song. The Large Margin Mechanism for Differentially Private Maximization. NIPS 2014."><strong>[CHS14]</strong></a> <a href="https://dl.acm.org/doi/10.1145/3188745.3188946" title=" Mark Bun, Cynthia Dwork, Guy N. Rothblum, Thomas Steinke. Composable and versatile privacy via truncated CDP. STOC 2018."><strong>[BDRS18]</strong></a> <a href="https://arxiv.org/abs/1905.13229" title="Mark Bun, Gautam Kamath, Thomas Steinke, Zhiwei Steven Wu. Private Hypothesis Selection. NeurIPS 2019."><strong>[BKSW19]</strong></a>.</p>
<p>There are a lot of interesting aspects of private selection, including questions for further research! We hope to have further posts about some of these topics.</p>
<hr />
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>For simplicity, we restrict our discussion here to finite sets of outputs, although the definitions, algorithms, and results can be extended to infinite sets. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>To be more precise, \((\varepsilon,\delta)\)-differential privacy is equivalent to demanding that \(\mathbb{E}[\max\{0,1-\exp(\varepsilon-Z)\}]\le\delta\) <a href="https://arxiv.org/abs/2004.00010" title="Clément L. Canonne, Gautam Kamath, Thomas Steinke. The Discrete Gaussian for Differential Privacy. NeurIPS 2020."><strong>[CKS20]</strong></a>. (To be completely precise, we must appropriately deal with the \(Z=\infty\) case, which we ignore in this discussion for simplicity.) <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:3" role="doc-endnote">
<p>This proof actually gives <a href="https://dongjs.github.io/2020/02/10/ExpMech.html">a slightly stronger result</a>: We can replace the sensitivity \(\Delta\) (defined in Equation 2) by half the range \[\hat\Delta = \frac12 \sup_{x,x’ \in \mathcal{X}^n : d(x,x’) \le 1} \left( \max_{\overline{y}\in\mathcal{Y}} \left( \ell(\overline{y},x) - \ell(\overline{y},x’) \right) - \min_{\underline{y}\in\mathcal{Y}} \left( \ell(\underline{y},x) - \ell(\underline{y},x’) \right) \right).\] We always have \(\hat\Delta \le \Delta\) but it is possible that \(\hat\Delta < \Delta\) and the privacy analysis of the exponential mechanism still works if we replace \(\Delta\) by \(\hat\Delta\). <a href="#fnref:3" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:4" role="doc-endnote">
<p>Equivalently, a randomized algorithm \(M : \mathcal{X}^n \to \mathcal{Y}\) satisfies \(\rho\)-concentrated differential privacy if, for all pairs of inputs \(x, x’ \in \mathcal{X}^n\) differing only on the data of a single individual, \[\forall \lambda > 0 ~~~~~ \mathrm{D}_{\lambda+1}(M(x)\|M(x’)) \le \lambda(\lambda+1)\rho,\] where \(\mathrm{D}_{\lambda+1}(M(x)\|M(x’))\) is the order \(\lambda+1\) Rényi divergence of \(M(x)\) from \(M(x’)\). <a href="#fnref:4" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:5" role="doc-endnote">
<p>To be precise, if \(M(x) = q(x) + \mathcal{N}(0,\sigma^2I)\), then \(M : \mathcal{X}^n \to \mathbb{R}^d\) satisfies \(\frac{\Delta_2^2}{2\sigma^2}\)-concentrated differential privacy, where \(\Delta_2 = \sup_{x,x’\in\mathcal{X}^n : d(x,x’)\le1} \|q(x)-q(x’)\|_2\) is the 2-norm sensitivity of \(q:\mathcal{X}^n \to \mathbb{R}^d\). Furthermore, the privacy loss of the Gaussian mechanism is itself a Gaussian and it makes the inequality defining concentrated differential privacy (Equation 3) an equality for all \(\lambda\). <a href="#fnref:5" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:6" role="doc-endnote">
<p>Note that the expectation of the privacy loss is simply the <a href="https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence">KL divergence</a>: \(\mathbb{E}[Z] = \mathrm{D}_1( M(x) \| M(x’) )\). <a href="#fnref:6" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:7" role="doc-endnote">
<p>We have presented selection here in terms of minimization, but most of the literature is in terms of maximization. <a href="#fnref:7" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>
Ryan Rogers, Thomas Steinke. Mon, 12 Jul 2021 10:00:00 -0700
https://differentialprivacy.org/exponential-mechanism-bounded-range/
Open Problem - Optimal Query Release for Pure Differential Privacy

<p>Releasing large sets of statistical queries is a centerpiece of the theory of differential privacy. Here, we are given a <em>dataset</em> \(x = (x_1,\dots,x_n) \in [T]^n\) and a set of <em>statistical queries</em> \(f_1,\dots,f_k\), where each query is given by some bounded function \(f_j : [T] \to [-1,1]\) and (abusing notation) its value on the dataset is defined as
\[
f_j(x) = \frac{1}{n} \sum_{i=1}^{n} f_j(x_i).
\]
We use \(f(x) = (f_1(x),\dots,f_k(x))\) to denote the vector consisting of the true answers to all these queries.
Our goal is to design an \((\varepsilon, \delta)\)-differentially private algorithm \(M\) that takes a dataset \(x\in [T]^n\) and outputs a random vector \(M(x)\in \mathbb{R}^k\) such that \(\| M(x) - f(x) \|\) is small in expectation for some norm \(\|\cdot\|\). Usually algorithms for this problem also give high probability bounds on the error, but we focus on expected error for simplicity.</p>
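<p>As a point of reference, the simplest baseline answers every query with the Laplace mechanism under basic composition, adding noise of scale \(2k/(\varepsilon n)\) per query; the algorithms discussed below do much better for large \(k\). A minimal, stdlib-only sketch (the function name and interface are our own):</p>

```python
import math
import random

def laplace_query_release(x, queries, epsilon, rng=None):
    """Answer k statistical queries with the Laplace mechanism and basic composition.

    Each query maps a record to [-1, 1], so changing one of the n records moves
    its average by at most 2/n; splitting the privacy budget evenly over the k
    queries gives a per-query Laplace noise scale of 2k/(epsilon * n).
    """
    rng = rng or random.Random(0)
    n, k = len(x), len(queries)
    scale = 2 * k / (epsilon * n)
    answers = []
    for f in queries:
        true_answer = sum(f(xi) for xi in x) / n
        u = rng.random() - 0.5
        noise = -scale * math.copysign(math.log(1 - 2 * abs(u)), u)  # Laplace via inverse CDF
        answers.append(true_answer + noise)
    return answers

# Two toy queries over records in [T] = {0, 1}.
queries = [lambda r: 1.0 if r == 0 else -1.0, lambda r: 1.0]
answers = laplace_query_release([0, 1, 0, 1], queries, epsilon=1.0)
```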
<p>This problem has been studied for both <em>pure differential privacy</em> (\(\delta = 0\)) and <em>approximate differential privacy</em> (\(\delta > 0\)), and for both \(\ell_\infty\)-error
\[
\mathbb{E}( \| M(x) - f(x)\|_{\infty} ) \leq \alpha,
\]
and \(\ell_2\)-error
\[
\mathbb{E}( \| M(x) - f(x)\|_{2} ) \leq \alpha k^{1/2},
\]
giving four variants of the problem. By now we know tight worst-case upper and lower bounds for two of these variants, and nearly tight bounds (up to logarithmic factors) for a third. The tightest known upper bounds are given in the following table.</p>
<table>
<tbody>
<tr>
<td> </td>
<td>Pure DP</td>
<td>Approx DP</td>
</tr>
<tr>
<td>\( \ell_2 \)<br />error</td>
<td>\( \alpha \lesssim \left(\frac{\log^2 k ~\cdot~ \log^{3/2}T}{\varepsilon n} \right)^{1/2} \) <br /> [<a href="https://arxiv.org/abs/1212.0297">NTZ13</a>]</td>
<td>\( \alpha \lesssim \left(\frac{\log^{1/2} T}{\varepsilon n} \right)^{1/2} \) <br /> [<a href="https://guyrothblum.files.wordpress.com/2014/11/drv10.pdf">DRV10</a>]</td>
</tr>
<tr>
<td>\( \ell_\infty \)<br />error</td>
<td>\( \alpha \lesssim \left(\frac{\log k ~\cdot~ \log T}{\varepsilon n} \right)^{1/3} \) <br /> [<a href="https://arxiv.org/abs/1109.2229">BLR13</a>]</td>
<td>\( \alpha \lesssim \left(\frac{\log k ~\cdot~ \log^{1/2} T}{\varepsilon n} \right)^{1/2} \) <br /> [<a href="https://guyrothblum.files.wordpress.com/2014/11/hr10.pdf">HR10</a>, <a href="https://arxiv.org/abs/1107.3731">GRU12</a>]</td>
</tr>
</tbody>
</table>
<p>The bounds for approximate DP are known to be tight [<a href="https://arxiv.org/abs/1311.3158">BUV14</a>]. Our two open problems both involve improving the best known upper bounds for pure differential privacy.</p>
<blockquote>
<p><b>Open Problem 1:</b> What is the best possible \(\ell_\infty\)-error for answering a worst-case set of \(k\) statistical queries over a domain of size \(T\) subject to \((\varepsilon,0)\)-differential privacy?</p>
</blockquote>
<p>We conjecture that the known upper bound in the table can be improved to
\[
\alpha = \left(\frac{\log k \cdot \log T}{\varepsilon n} \right)^{1/2},
\]
which is known to be the best possible [<a href="https://dataspace.princeton.edu/handle/88435/dsp01vq27zn422">Har11</a>, Theorem 4.5.1].</p>
<blockquote>
<p><b>Open Problem 2:</b> What is the best possible \(\ell_2\)-error for answering a worst-case set of \(k\) statistical queries over a domain of size \(T\) subject to \((\varepsilon,0)\)-differential privacy?</p>
</blockquote>
<p>We conjecture that the upper bound can be improved to
\[
\alpha = \left(\frac{\log T}{\varepsilon n} \right)^{1/2}.
\]
The construction used in [<a href="https://dataspace.princeton.edu/handle/88435/dsp01vq27zn422">Har11</a>, Theorem 4.5.1] can be analyzed to show this bound would be tight. Note, in particular, that this conjecture implies that the tight upper bound has no dependence on the number of queries, similarly to the case of \(\ell_2\) error and approximate DP.</p>
Sasho Nikolov, Jonathan Ullman. Wed, 07 Jul 2021 13:45:00 -0400
https://differentialprivacy.org/open-problem-optimal-query-release/
Conference Digest - ICML 2021

<p><a href="https://icml.cc/Conferences/2021">ICML 2021</a>, one of the biggest conferences in machine learning, naturally has a ton of interesting-sounding papers on the topic of differential privacy.
We went through this year’s <a href="https://icml.cc/Conferences/2021/AcceptedPapersInitial">accepted papers</a> and aggregated all the relevant papers we could find.
In addition, this year features three workshops on the topic of privacy, as well as a tutorial.
As always, please inform us if we overlooked any papers on differential privacy.</p>
<h2 id="workshops">Workshops</h2>
<ul>
<li>
<p><a href="http://federated-learning.org/fl-icml-2021/">Federated Learning for User Privacy and Data Confidentiality</a></p>
</li>
<li>
<p><a href="https://sites.google.com/view/ml4data">Machine Learning for Data: Automated Creation, Privacy, Bias</a></p>
</li>
<li>
<p><a href="https://tpdp.journalprivacyconfidentiality.org/2021/">Theory and Practice of Differential Privacy</a></p>
</li>
</ul>
<h2 id="tutorial">Tutorial</h2>
<ul>
<li><a href="https://icml.cc/Conferences/2021/Schedule?showEvent=10839">Privacy in Learning: Basics and the Interplay</a><br />
<a href="https://www.microsoft.com/en-us/research/people/huzhang/">Huishuai Zhang</a>, <a href="https://www.microsoft.com/en-us/research/people/weic/">Wei Chen</a></li>
</ul>
<h2 id="papers">Papers</h2>
<ul>
<li>
<p><a href="https://arxiv.org/abs/2009.02668">A Framework for Private Matrix Analysis in Sliding Window Model</a><br />
<a href="https://sites.google.com/view/jalajupadhyay/home">Jalaj Upadhyay</a>, <a href="https://www.fujitsu.com/us/about/businesspolicy/tech/rd/research-staff/sarvagya.html">Sarvagya Upadhyay</a></p>
</li>
<li>
<p>Accuracy, Interpretability, and Differential Privacy via Explainable Boosting<br />
<a href="https://scholar.google.com/citations?user=HmxjgMAAAAAJ">Harsha Nori</a>, <a href="https://www.microsoft.com/en-us/research/people/rcaruana/">Rich Caruana</a>, <a href="https://sites.google.com/view/zhiqi-bu">Zhiqi Bu</a>, <a href="https://heyyjudes.github.io/">Judy Hanwen Shen</a>, <a href="https://www.microsoft.com/en-us/research/people/jakul/">Janardhan Kulkarni</a></p>
</li>
<li>
<p>Differentially Private Aggregation in the Shuffle Model: Almost Central Accuracy in Almost a Single Message<br />
<a href="https://sites.google.com/view/badihghazi/home">Badih Ghazi</a>, <a href="https://sites.google.com/site/ravik53/">Ravi Kumar</a>, <a href="https://pasin30055.github.io/">Pasin Manurangsi</a>, <a href="https://rasmuspagh.net/">Rasmus Pagh</a>, <a href="https://www.linkedin.com/in/amersinha/">Amer Sinha</a></p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2011.00467">Differentially Private Bayesian Inference for Generalized Linear Models</a><br />
<a href="https://warwick.ac.uk/fac/sci/dcs/people/u1554597">Tejas Kulkarni</a>, <a href="https://users.aalto.fi/~jalkoj1/">Joonas Jälkö</a>, <a href="https://scholar.google.com/citations?user=Y_EvCPAAAAAJ">Antti Koskela</a>, <a href="https://people.aalto.fi/samuel.kaski">Samuel Kaski</a>, <a href="https://www.cs.helsinki.fi/u/ahonkela/">Antti Honkela</a></p>
</li>
<li>
<p>Differentially-Private Clustering of Easy Instances<br />
<a href="http://www.cohenwang.com/edith/">Edith Cohen</a>, <a href="http://www.cs.tau.ac.il/~haimk/">Haim Kaplan</a>, <a href="https://www.tau.ac.il/~mansour/">Yishay Mansour</a>, <a href="https://www.uri.co.il/">Uri Stemmer</a>, <a href="https://www.linkedin.com/in/eliad-tsfadia-21482b96/">Eliad Tsfadia</a></p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2102.08885">Differentially Private Correlation Clustering</a><br />
<a href="https://cs-people.bu.edu/mbun/">Mark Bun</a>, <a href="https://elias.ba30.eu/">Marek Elias</a>, <a href="https://www.microsoft.com/en-us/research/people/jakul/">Janardhan Kulkarni</a></p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2105.13287">Differentially Private Densest Subgraph Detection</a><br />
<a href="https://biocomplexity.virginia.edu/person/dung-nguyen">Dung Nguyen</a>, <a href="https://engineering.virginia.edu/faculty/anil-vullikanti">Anil Vullikanti</a></p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2102.08244">Differentially Private Quantiles</a><br />
<a href="http://jgillenw.com/">Jennifer Gillenwater</a>, <a href="https://www.majos.net/">Matthew Joseph</a>, <a href="https://www.alexkulesza.com/">Alex Kulesza</a></p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2103.06641">Differentially Private Query Release Through Adaptive Projection</a><br />
<a href="https://sergulaydore.github.io/">Sergul Aydore</a>, <a href="https://wibrown.github.io/">William Brown</a>, <a href="https://www.cis.upenn.edu/~mkearns/">Michael Kearns</a>, <a href="http://www-cs-students.stanford.edu/~kngk/">Krishnaram Kenthapadi</a>, <a href="https://www.lucamel.is/">Luca Melis</a>, <a href="https://www.cis.upenn.edu/~aaroth/">Aaron Roth</a>, <a href="https://ankitsiva.xyz/">Ankit Siva</a></p>
</li>
<li>
<p>Differentially Private Sliced Wasserstein Distance<br />
<a href="http://asi.insa-rouen.fr/enseignants/~arakoto/">Alain Rakotomamonjy</a>, <a href="https://pageperso.lif.univ-mrs.fr/~liva.ralaivola/doku.php">Liva Ralaivola</a></p>
</li>
<li>
<p>Large Scale Private Learning via Low-rank Reparametrization<br />
<a href="https://scholar.google.com/citations?user=FcRGdiwAAAAJ">Da Yu</a>, <a href="https://www.microsoft.com/en-us/research/people/huzhang/">Huishuai Zhang</a>, <a href="https://www.microsoft.com/en-us/research/people/weic/">Wei Chen</a>, Jian Yin, <a href="https://www.microsoft.com/en-us/research/people/tyliu/">Tie-Yan Liu</a></p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2102.08598">Leveraging Public Data for Practical Private Query Release</a><br />
<a href="https://www.linkedin.com/in/terrance-liu-26796974/">Terrance Liu</a>, <a href="https://sites.google.com/umn.edu/giuseppe-vietri/home">Giuseppe Vietri</a>, <a href="http://www.thomas-steinke.net/">Thomas Steinke</a>, <a href="https://www.ccs.neu.edu/home/jullman/">Jonathan Ullman</a>, <a href="https://zstevenwu.com/">Steven Wu</a></p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2104.09734">Locally Private k-Means in One Round</a><br />
Alisa Chang, <a href="https://sites.google.com/view/badihghazi/home">Badih Ghazi</a>, <a href="https://sites.google.com/site/ravik53/">Ravi Kumar</a>, <a href="https://pasin30055.github.io/">Pasin Manurangsi</a></p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2102.12099">Lossless Compression of Efficient Private Local Randomizers</a><br />
<a href="http://vtaly.net/">Vitaly Feldman</a>, <a href="http://kunaltalwar.org/">Kunal Talwar</a></p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2105.08233">Oneshot Differentially Private Top-k Selection</a><br />
<a href="https://lsa.umich.edu/stats/people/phd-students/qiaogang.html">Gang Qiao</a>, <a href="http://www-stat.wharton.upenn.edu/~suw/">Weijie Su</a>, <a href="https://research.google/people/LiZhang/">Li Zhang</a></p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2002.12321">PAPRIKA: Private Online False Discovery Rate Control</a><br />
<a href="https://wanrongz.github.io/">Wanrong Zhang</a>, <a href="http://www.gautamkamath.com/">Gautam Kamath</a>, <a href="https://sites.gatech.edu/rachel-cummings/">Rachel Cummings</a></p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2103.00039">Practical and Private (Deep) Learning without Sampling or Shuffling</a><br />
<a href="https://kairouzp.github.io/">Peter Kairouz</a>, <a href="https://research.google/people/author35837/">Brendan McMahan</a>, <a href="https://shs037.github.io/">Shuang Song</a>, <a href="http://www.omthakkar.com/">Om Thakkar</a>, <a href="https://athakurta.squarespace.com/">Abhradeep Thakurta</a>, <a href="https://research.google/people/106689/">Zheng Xu</a></p>
</li>
<li>
<p>Private Adaptive Gradient Methods for Convex Optimization<br />
<a href="http://web.stanford.edu/~asi/">Hilal Asi</a>, <a href="https://web.stanford.edu/~jduchi/">John Duchi</a>, <a href="https://afallah.lids.mit.edu/">Alireza Fallah</a>, <a href="https://scholar.google.com/citations?user=_JXjrEp9FhYC">Omid Javidbakht</a>, <a href="http://kunaltalwar.org/">Kunal Talwar</a></p>
</li>
<li>
<p>Private Alternating Least Squares: (Nearly) Optimal Privacy/Utility Trade-off for Matrix Completion<br />
Steve Chien, <a href="https://www.prateekjain.org/">Prateek Jain</a>, <a href="http://walid.krichene.net/">Walid Krichene</a>, <a href="https://scholar.google.com/citations?user=yR-ugIoAAAAJ">Steffen Rendle</a>, <a href="https://shs037.github.io/">Shuang Song</a>, <a href="https://athakurta.squarespace.com/">Abhradeep Thakurta</a>, <a href="https://research.google/people/LiZhang/">Li Zhang</a></p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2103.01516">Private Stochastic Convex Optimization: Optimal Rates in L1 Geometry</a><br />
<a href="http://web.stanford.edu/~asi/">Hilal Asi</a>, <a href="http://vtaly.net/">Vitaly Feldman</a>, <a href="https://tomerkoren.github.io/">Tomer Koren</a>, <a href="http://kunaltalwar.org/">Kunal Talwar</a></p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2102.06387">The Distributed Discrete Gaussian Mechanism for Federated Learning with Secure Aggregation</a><br />
<a href="https://kairouzp.github.io/">Peter Kairouz</a>, <a href="https://kenziyuliu.github.io/">Ziyu Liu</a>, <a href="http://www.thomas-steinke.net/">Thomas Steinke</a></p>
</li>
</ul>
Gautam Kamath, Mon, 07 Jun 2021 12:30:00 -0400
https://differentialprivacy.org/icml2021/
Statistical Inference is Not a Privacy Violation<p>On April 28, 2021, the US Census Bureau <a href="https://www.census.gov/programs-surveys/decennial-census/decade/2020/planning-management/process/disclosure-avoidance/2020-das-updates.html">released</a> a new demonstration of its differentially private Disclosure Avoidance System (DAS) for the 2020 US Census. The public was given a month to submit feedback before the system was finalized.
This demonstration data and the feedback has generated a lot of discussion, including media coverage on <a href="https://www.npr.org/2021/05/19/993247101/for-the-u-s-census-keeping-your-data-anonymous-and-useful-is-a-tricky-balance">National Public Radio</a>, in <a href="https://www.washingtonpost.com/local/social-issues/2020-census-differential-privacy-ipums/2021/06/01/6c94b46e-c30d-11eb-93f5-ee9558eecf4b_story.html">the Washington Post</a>, and via <a href="https://apnews.com/article/business-census-2020-technology-e701e313e841674be6396321343b7e49">the Associated Press</a>. The DAS is also the subject of an <a href="https://www.courtlistener.com/docket/59728874/state-v-united-states-department-of-commerce/">ongoing lawsuit</a>.</p>
<p>The following is a response from experts on differential privacy and cryptography to the <a href="https://alarm-redist.github.io/posts/2021-05-28-census-das/Harvard-DAS-Evaluation.pdf">working paper of Kenny et al.</a> on the impact of the 2020 U.S. Census Disclosure Avoidance System (DAS) on redistricting.</p>
<p>This paper makes a <a href="https://github.com/frankmcsherry/blog/blob/master/posts/2016-06-14.md">common but serious mistake</a>, from which the authors wrongly conclude that the Census Bureau should not modernize its privacy-protection technology. Not only do the results fail to support this conclusion; they instead show the power of the methodology adopted by the Bureau, known as differential privacy – precisely the opposite of the authors’ erroneous conclusions.</p>
<p>Trust is essential; once destroyed it can be nearly impossible to rebuild, and getting privacy wrong in this Census will have an impact on all future government surveys. The Census Bureau has shown that their <a href="https://desfontain.es/privacy/index.html">2010 DAS does not survive modern privacy threats</a>, and in fact was roughly equivalent to publishing nearly three quarters of the responses. The Census Bureau’s decision to modernize its Disclosure Avoidance System (DAS) for the 2020 Decennial Census to be differentially private is the correct response to decades of theoretical and empirical work on the privacy risks inherent in releasing large numbers of statistics derived from a dataset.</p>
<p>The importance of the Census, and the reality that no technology competing with differential privacy exists for meeting their confidentiality obligations, makes it very important that the public and policy makers have accurate information. We imagine you will be reporting on this topic in the future. Others have <a href="https://gerrymander.princeton.edu/DAS-evaluation-Kenny-response">addressed flaws</a> in the paper regarding implications for redistricting; we want to provide you with an understanding of the privacy mistake in the study.</p>
<p>To understand the flaw in the paper’s argument, consider the role of smoking in determining cancer risk. Statistical study of medical data has taught us that smoking causes cancer. Armed with this knowledge, if we are told that 40 year old Mr. S is a smoker, we can conclude that he has an elevated cancer risk. The statistical inference of elevated cancer risk—made before Mr. S was born—did not violate Mr. S’s privacy. To conclude otherwise is to define science to be a privacy attack. This is the mistake made in the paper.</p>
<p>This is basically what Kenny et al. found.</p>
<p>The authors looked at three different predictors: one built directly from (swapped) 2010 Census data and the other two built using differential privacy applied to (swapped) 2010 Census data, and evaluated all three “on approximately 5.8 million registered voters included in the North Carolina February 2021 voter file.” What did they find?</p>
<blockquote>
<p>“Our analysis shows that across three main racial and ethnic groups, the predictions based on the [differential privacy based] DAS data appear to be as accurate as those based on the 2010 Census data.”</p>
</blockquote>
<p>This makes perfect sense. Bayesian Improved Surname Geocoding, or BISG, is a statistical method of building a predictor inferring ethnicity (or race) from name and geography. Here, name and geography play the role of the information as to whether or not one smokes, and the prediction of ethnicity corresponds to the cancer risk prediction. The predictor is constructed from census data on the ethnic makeup of individual census blocks and statistical information about the popularity of individual surnames within different ethnic groups. With such a predictor, moving across the country can change the outcome, as can changing one’s name. But a BISG prediction is not about the individual, it is about the statistical—population-level—relationship between name, geography, and ethnicity.</p>
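<p>To illustrate the mechanics (with made-up numbers of our own, not figures from the paper or from any real surname or census table), a BISG-style predictor is just a Bayes update combining two population-level tables:</p>

```python
def bisg_posterior(surname_given_group, group_given_block, groups):
    """Illustrative BISG-style Bayes update: combine a surname table
    P(surname | group) with block-level census proportions P(group | block).
    Both tables are population-level statistics; no individual census
    record enters the computation."""
    unnorm = {g: surname_given_group[g] * group_given_block[g] for g in groups}
    total = sum(unnorm.values())
    return {g: unnorm[g] / total for g in groups}

# Toy numbers: the same person, with the same surname, gets a different
# prediction after moving to a block with a different makeup; the
# prediction tracks name and geography, not the individual.
groups = ["A", "B"]
surname = {"A": 0.7, "B": 0.3}    # P(surname | group), illustrative
old_block = {"A": 0.2, "B": 0.8}  # makeup of the old block
new_block = {"A": 0.9, "B": 0.1}  # makeup of the new block
p_old = bisg_posterior(surname, old_block, groups)
p_new = bisg_posterior(surname, new_block, groups)
```

<p>The prediction changes when the inputs change, exactly as with the smoking example: it is a statement about aggregates, not about any one person's record.</p>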
<p>The differentially private DAS enabled learning to make statistical inferences about ethnicity from name and geography, without compromising the privacy of any Census respondent, exactly as it was intended to do. In other words, the paper establishes fitness-for-use of the DAS data for the BISG statistical method! Because differential privacy permits learning statistical patterns without compromising the privacy of individual members of the dataset, it should not interfere with learning the predictor, which is exactly what the authors found. Returning to our “smoking causes cancer” example, the researchers found that it was just as easy to detect this statistical pattern with a modern disclosure avoidance system in place as it was with the older, less protective system.</p>
<p>The authors’ conclusion – “the DAS data may not provide universal privacy protection” – is simply not supported by their findings.</p>
<p>They have confused learning that smoking causes cancer—and applying this predictor to an individual smoker—with learning medical details of individual patients in the dataset. Change the input to the predictor—replace “smoker” with “non-smoker” or move across the country, for example—and the prediction changes.</p>
<p>The BISG prediction is not about the individual, it does not accompany her as she relocates from one neighborhood to another, it is a statistical relationship between name, geography, and ethnicity. It is not a privacy compromise, it is science.</p>
<p>Signed:</p>
<ul>
<li>Mark Bun, Assistant Professor of Computer Science, Boston University</li>
<li>Damien Desfontaines, Privacy Engineer, Google</li>
<li>Cynthia Dwork, Professor of Computer Science, Harvard University</li>
<li>Moni Naor, Professor of Computer Science, The Weizmann Institute of Science</li>
<li>Kobbi Nissim, Professor of Computer Science, Georgetown University</li>
<li>Aaron Roth, Professor of Computer and Information Science, University of Pennsylvania</li>
<li>Adam Smith, Professor of Computer Science, Boston University</li>
<li>Thomas Steinke, Research Scientist, Google</li>
<li>Jonathan Ullman, Assistant Professor of Computer Science, Northeastern University</li>
<li>Salil Vadhan, Professor of Computer Science and Applied Mathematics, Harvard University</li>
</ul>
<p>Please contact Cynthia Dwork for contact information for authors happy to speak about this on the record.</p>
Jonathan Ullman, Thu, 03 Jun 2021 18:30:00 -0500
https://differentialprivacy.org/inference-is-not-a-privacy-violation/
Call for Papers - Workshop on the Theory and Practice of Differential Privacy (TPDP 2021)<p>Work on differential privacy spans a number of different research communities, including theoretical computer science, machine learning, statistics, security, law, databases, cryptography, programming languages, social sciences, and more.
Each of these communities may choose to publish their work in their own community’s venues, which could result in small groups of differential privacy researchers becoming isolated.
To alleviate these issues, we have the Workshop on the <a href="https://tpdp.journalprivacyconfidentiality.org/">Theory and Practice of Differential Privacy</a> (TPDP), which is intended to bring these subcommunities together under one roof (well, a virtual one at least for 2020 and 2021).</p>
<p>We have just posted the <a href="https://tpdp.journalprivacyconfidentiality.org/2021/TPDP2021CfP.pdf">Call for Papers</a> for <a href="https://tpdp.journalprivacyconfidentiality.org/2021/">TPDP 2021</a>, which will be a workshop affiliated with <a href="https://icml.cc/Conferences/2021/">ICML 2021</a>.
The submission deadline is Friday, May 28, 2021, Anywhere on Earth (conveniently, two days after the deadline for NeurIPS 2021).
Submissions are extended abstracts of up to four pages in length, and will undergo a lightweight review process, based mostly on relevance and interest to the differential privacy community.
The workshop is non-archival, so feel free to submit recent work at any stage of publication.
Submissions will be on <a href="https://openreview.net/group?id=ICML.cc/2021/Workshop/TPDP">OpenReview</a>, but since submitted work may be preliminary, the process will be “closed” similar to traditional review processes.
One goal of the workshop is to be inclusive and welcoming to newcomers to the differential privacy community, so please consider participating even if you are new to the field.</p>
<p>Most papers will be presented as posters at a (virtual) poster session, while a few papers will be selected for spotlight talks.
There will also be plenary talks by <a href="https://www.cs.huji.ac.il/~katrina/">Katrina Ligett</a> (Hebrew University of Jerusalem) and <a href="https://www2.math.upenn.edu/~ryrogers/">Ryan Rogers</a> (LinkedIn).
The program co-chairs are <a href="https://sites.gatech.edu/rachel-cummings/">Rachel Cummings</a> and <a href="http://www.gautamkamath.com/">myself</a>.
Please submit your best work on differential privacy, and hope to see you there!</p>
<p align="center">
<img src="/images/Ligett.png" />
<img src="/images/Rogers.png" /> <br />
<i>Invited speakers Katrina Ligett and Ryan Rogers</i>
</p>
Gautam Kamath, Wed, 05 May 2021 10:00:00 -0400
https://differentialprivacy.org/tpdp21-cfp/
ALT Highlights - An Equivalence between Private Learning and Online Learning (ALT '21 Tutorial)<p>Welcome to ALT Highlights, a series of blog posts spotlighting various happenings at the recent conference <a href="http://algorithmiclearningtheory.org/alt2021/">ALT 2021</a>, including plenary talks, tutorials, trends in learning theory, and more!
To reach a broad audience, the series will be disseminated as guest posts on different blogs in machine learning and theoretical computer science.
Given the topic of this post, we felt <a href="https://differentialprivacy.org/">DifferentialPrivacy.org</a> was a great fit.
This initiative is organized by the <a href="https://www.let-all.com/">Learning Theory Alliance</a>, and overseen by <a href="http://www.gautamkamath.com/">Gautam Kamath</a>.
All posts in ALT Highlights are indexed on the official <a href="https://www.let-all.com/blog/2021/04/20/alt-highlights-2021/">Learning Theory Alliance blog</a>.</p>
<p>The second post is coverage of <a href="http://www.cs.technion.ac.il/~shaymrn">Shay Moran</a>’s <a href="https://www.youtube.com/watch?v=wk910Aj559A">tutorial</a>, by <a href="https://people.eecs.berkeley.edu/~kush/">Kush Bhatia</a> and <a href="https://web.stanford.edu/~mglasgow/">Margalit Glasgow</a>.</p>
<hr />
<p>The tutorial at ALT was given by <a href="http://www.cs.technion.ac.il/~shaymrn/">Shay Moran</a>, an assistant professor at the Technion in Haifa.
His <a href="https://www.youtube.com/watch?v=wk910Aj559A">talk</a> focused on recent results showing a deep connection between two important areas in learning theory: online learning and differentially private learning.
While online learning is a well-established area that has been studied since the invention of the Perceptron algorithm in the 1950s, differential privacy (DP) - introduced in the seminal work of Dwork, McSherry, Nissim, and Smith in 2006 <a href="https://journalprivacyconfidentiality.org/index.php/jpc/article/view/405"><strong>[DMNS06]</strong></a> - has received increasing attention in recent years from both theoretical and applied research communities, as well as from industry and government.
This recent interest in differential privacy comes from the need to protect the privacy of individuals while still allowing one to derive useful conclusions from a dataset.
Shay, in a sequence of papers with co-authors Noga Alon, Mark Bun, Roi Livni, and Maryanthe Malliaris, revealed a surprising qualitative connection between these two models of learning: a concept class can be learned in an online fashion if and only if this concept class can be learned in an offline fashion by a differentially private algorithm.</p>
<p>The main objectives of this tutorial were to give an in-depth foray into this recent work, and present an opportunity to young researchers to identify interesting research problems at the intersection of these two fields. This line of work (<a href="https://arxiv.org/abs/1806.00949"><strong>[ALMM19]</strong></a>, <a href="https://arxiv.org/abs/2003.00563"><strong>[BLM20]</strong></a>) - which has been primarily featured in general CS Theory conferences, including recently winning a best paper award at FOCS - introduced several new techniques which could be of use to the machine learning theory community. The first part of this article focuses on the technical challenges of characterizing DP learnability and the solutions used in Shay’s work, which originate from combinatorics and model theory. In the rest of this article, we highlight some of the exciting open directions Shay sees more broadly in learning theory.</p>
<h2 id="background-pac-learning-online-learning-and-dp-pac-learning">Background: PAC Learning, Online Learning, and DP PAC Learning</h2>
<p>We begin by reviewing the classical setting of PAC learning.
The goal of PAC learning is to learn some function from a <em>concept class</em> \(\mathcal{H} \), a set of functions from some domain \( \mathcal{X} \) to \( \{0, 1\} \).
<img align="right" src="/images/PACLearning.png" style="width:300px;height:300px;" />
In the realizable setting of PAC learning, which we will focus on here, the learner is presented with \( n \) labeled training samples \( \{(x_i, y_i)\}_{i=1}^{n} \) where \( x_i \in \mathcal{X} \) and \( y_i = h(x_i) \) for some function \( h \in \mathcal{H} \). While the learner knows the concept class \( \mathcal{H} \), the function \( h \) is unknown, and after seeing the \( n \) samples, the learner algorithm \( A \) must output some function \( \hat{h}: \mathcal{X} \rightarrow \{0, 1\} \). A concept class is <em>PAC-learnable</em> if there exists an algorithm \( A \) such that for any distribution over samples, as the number of labeled training samples goes to infinity, the probability that \( \hat{h} \) incorrectly labels a new random sample goes to 0. We will say that \( A \) is a <em>proper</em> learner if \( A \) outputs a function in \(\mathcal{H}\), while \( A \) is <em>improper</em> if it may output a function outside of \( \mathcal{H} \). More quantitative measures of learnability concern the exact number of samples \( n \) needed for the learner to correctly predict future labels with nontrivial probability.</p>
<p>DP PAC learning imposes an additional restriction on PAC learning: the learner must output a function which does not reveal too much about any one sample in the input. Formally, we say an algorithm \( A \) is \( (\varepsilon, \delta) \)-DP if for any two neighboring inputs \( X = (X_1, \cdots, X_i, \cdots, X_n) \) and \( X’ = (X_1, \cdots, X_i’, \cdots, X_n) \) which differ at exactly one sample, for any set \( S \), \( \Pr[A(X) \in S] \leq e^\varepsilon \Pr[A(X’) \in S] + \delta \). A concept class is <em>DP PAC learnable</em> if it can be PAC-learned by a \( (0.1, o(1/n)) \)-DP algorithm \( A \). That is, changing any one training sample \( (x_i, y_i) \) should not affect the distribution over concepts output by \( A \) by too much.</p>
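<p>To make the definition concrete, here is a minimal sketch (ours, not from the tutorial) that exhaustively checks the \( (\varepsilon, \delta) \) inequality for randomized response on a single bit, perhaps the simplest differentially private algorithm:</p>

```python
import math

def randomized_response_dist(bit, eps):
    """Output distribution of randomized response on one bit:
    report the true bit with probability e^eps / (1 + e^eps)."""
    p_true = math.exp(eps) / (1 + math.exp(eps))
    return {bit: p_true, 1 - bit: 1 - p_true}

def satisfies_dp(mech_eps, eps, delta=0.0):
    """Exhaustively check Pr[A(x) in S] <= e^eps * Pr[A(x') in S] + delta
    over the two neighbouring one-bit inputs x = 0, x' = 1 and every
    outcome set S (there are only four)."""
    d0 = randomized_response_dist(0, mech_eps)
    d1 = randomized_response_dist(1, mech_eps)
    for S in ([], [0], [1], [0, 1]):
        p0 = sum(d0[o] for o in S)
        p1 = sum(d1[o] for o in S)
        if p0 > math.exp(eps) * p1 + delta + 1e-12:
            return False
        if p1 > math.exp(eps) * p0 + delta + 1e-12:
            return False
    return True
```

<p>Randomized response calibrated to \( \varepsilon = 1 \) passes the check at \( \varepsilon = 1 \) but fails it at \( \varepsilon = 0.5 \), matching the definition.</p>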
<p>Online learning considers a setting where the samples arrive one-by-one, and the learner must make predictions as this process unfolds. <img align="right" src="/images/OnlineLearning.png" style="width:300px;height:300px;" />Formally, in realizable online learning, at each round \( i = 1,\dots,T \), the learner is presented with a sample \( x_i \). The learner must predict the label, and then the true label \( y_i \) is revealed to the learner. The sequence of examples \( x_i \) and the labels \( y_i \) may be chosen adversarially, but they must be consistent with some function \( h \in \mathcal{H} \). The goal of the learner is to minimize the total number of mistakes made by round \( T \), also called the <em>mistake bound</em>. If there is a learner such that as \( T \rightarrow \infty \), the number of mistakes is \( o(T) \), we say that the class of functions \( \mathcal{H} \) is <em>online learnable</em>. Because of the adversarial nature of the examples, online learning is well known to be much harder than PAC learning, and is possible precisely when the <em>Littlestone Dimension</em> of the concept class is finite. Figure 1 below illustrates the definition of the Littlestone Dimension. One important PAC-learnable concept class which is not online learnable is the infinite class of thresholds on \( \mathbb{R} \): the set of functions \(\{h_t\}_{t \in \mathbb{R}} \) where \( h_t(x) = \textbf{1}(x > t) \).</p>
<p><img src="/images/LD.png" alt="Figure 1: Littlestone Dimension" title="Figure 1: Littlestone Dimension" /></p>
<p><strong>Figure 1</strong>: The Littlestone dimension is the maximum depth of a tree shattered by \( \mathcal{H} \), where each node may contain a unique element from \(\mathcal{X}\). The tree is shattered if each leaf can be labeled by a function in \(\mathcal{H} \) that labels each element on the path to the root according to the value of the edge entering it. In this figure, we show how the set of thresholds on \( [0, 1] \) shatters this tree of depth 3.</p>
<p>It’s worth taking a moment to understand intuitively why learning thresholds might be hard for both an online learner and a DP learner. In an online setting, suppose the adversary chooses the next example to be any value of \( x \) in between all previously 0-labeled examples and all 1-labeled examples. Then no matter what label \( A \) chooses, the adversary can say \( A \) was incorrect. In a DP setting, a simple proper learner that outputs a threshold that makes no errors on the training data will reveal too much information about the samples at the boundary of 0-labeled samples and 1-labeled samples.</p>
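<p>The online half of this intuition fits in a few lines of Python (our illustration; the function names are invented): the adversary always queries the midpoint of the interval of thresholds still consistent with all past labels and contradicts whatever the learner predicts, and the resulting labels remain consistent with some threshold \( h_t \):</p>

```python
def threshold_adversary(learner, rounds):
    """Force any online learner of thresholds on [0, 1] to err on every
    round: query the midpoint of the interval of thresholds still
    consistent with all past labels, then report the opposite of the
    learner's prediction."""
    lo, hi = 0.0, 1.0  # every t in (lo, hi) is consistent with all labels so far
    mistakes = 0
    for _ in range(rounds):
        x = (lo + hi) / 2
        pred = learner(x)
        label = 1 - pred   # the adversary contradicts the learner...
        mistakes += 1      # ...so every round is a mistake
        # Keep the labels consistent with some h_t(x) = 1(x > t):
        if label == 1:
            hi = x  # label 1 forces t < x
        else:
            lo = x  # label 0 forces t >= x
    return mistakes

# Whatever the learner does, it errs every single round:
assert threshold_adversary(lambda x: 1, 20) == 20
```

<p>Since the consistent interval \( (lo, hi) \) never empties, no mistake bound of \( o(T) \) is possible, which is exactly the failure of online learnability for thresholds.</p>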
<h2 id="a-challenging-question">A Challenging Question</h2>
<p>The journey to characterizing DP PAC learnability started soon after the introduction of DP, with <a href="https://arxiv.org/abs/0803.0924"><strong>[KLNRS08]</strong></a>, where the concept of DP PAC learning was introduced. This work primarily considered the more stringent <em>pure</em> DP-learning, where \( \delta = 0 \). It established that any finite concept class could be learned privately by applying the <em>exponential mechanism</em> (a standard technique in DP) to the empirical error of each candidate concept. Using the exponential mechanism, the learner could output each function \( h \in \mathcal{H}\) with probability proportional to \( \exp(-\varepsilon \cdot (\# \text{ of errors of } h \text{ on the training data})) \). This was sufficiently private because the empirical error is not too sensitive to a change of one training sample. Later works <a href="https://arxiv.org/abs/1402.2224"><strong>[BNS14]</strong></a> combinatorially characterized the complexity of pure DP PAC learning in terms of a measure called <em>representation dimension</em>, showing that in some cases where PAC learning was possible, pure DP PAC learning was not. <a href="https://arxiv.org/abs/1504.07553"><strong>[BNSV15]</strong></a> further yielded lower bounds on the limits of proper DP PAC learning.<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup><br />
Both pure and proper DP learning, though, are significantly more stringent than improper DP learning, and proving lower bounds against an improper DP learner posed a serious challenge. Even the following simple-sounding question was unsolved:</p>
<blockquote>
<p>Can an improper DP algorithm learn the infinite class of thresholds over [0, 1]? (*)</p>
</blockquote>
<p>This question stands at the center of Shay’s work, which unfolded while Shay was residing at the Institute for Advanced Study (IAS) at Princeton from 2017 to 2019.<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup> At the time, Shay was working on understanding the expressivity of limited mutual-information algorithms, that is, algorithms which expose little information about the total input. “If we were to have directly worked on this problem, I believe we wouldn’t have solved it,” Shay says. Instead, they came from the angle of mutual information, a concept qualitatively similar to DP, but armed with a rich toolkit from 70 years of information theory. One of Shay’s prior works with Raef Bassily, Ido Nachum, Jonathan Shafer, and Amir Yehudayoff established lower bounds on the mutual information of any proper algorithm that learns thresholds <a href="https://arxiv.org/abs/1710.05233"><strong>[BMNSY18]</strong></a>, though this didn’t yet address the challenge presented by improper learners.</p>
<p>Unlike most lower bounds in theoretical computer science, proving hardness of learning the infinite class of thresholds on the line against an improper DP algorithm would require coming up with algorithm-specific distributions over samples. That is, instead of showing that one distribution over samples would be impossible to learn for all algorithms — in the way that a uniform distribution over a set in \(\mathcal{X}\) shattered by the concept class is hard to PAC-learn for any algorithm — they would have to come up with a distribution on \(\mathcal{X}\) specific to each candidate learning algorithm. Indeed, if the distribution were known to the learner, it would be possible to devise a DP algorithm using the exponential mechanism (again!) which could learn any PAC-learnable concept class. Similar to the case of finite concept classes, here we can apply the exponential mechanism to some finite set of representative functions forming a cover of the concept class.</p>
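<p>For a finite concept class, the exponential-mechanism learner discussed above is short enough to sketch (our rendering; we use the standard \( \varepsilon/2 \) calibration, since the error count has sensitivity 1 in one training example):</p>

```python
import math
import random

def exp_mech_learner(concepts, data, eps, rng=random):
    """Sample hypothesis h with probability proportional to
    exp(-(eps/2) * err(h)), where err(h) counts training errors of h.
    Changing one training example changes err(h) by at most 1."""
    errs = [sum(h(x) != y for x, y in data) for h in concepts]
    low = min(errs)  # shift by the minimum for numerical stability
    weights = [math.exp(-0.5 * eps * (e - low)) for e in errs]
    r = rng.random() * sum(weights)
    for h, w in zip(concepts, weights):
        r -= w
        if r <= 0:
            return h
    return concepts[-1]

# Thresholds on a grid, learning from four labelled points:
concepts = [lambda x, t=t: int(x > t) for t in [i / 10 for i in range(11)]]
data = [(0.05, 0), (0.25, 0), (0.75, 1), (0.95, 1)]
h = exp_mech_learner(concepts, data, eps=50.0, rng=random.Random(0))
```

<p>At a large \( \varepsilon \) essentially all of the weight sits on the zero-error thresholds; shrinking \( \varepsilon \) flattens the output distribution, trading utility for privacy.</p>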
<h2 id="uncovering-a-solution">Uncovering a Solution</h2>
<p>At the IAS, Shay met his initial team to tackle this obstacle: Noga Alon, a combinatorialist, Roi Livni, a learning theorist, and Maryanthe Malliaris, a model theorist. Shay and Maryanthe would walk together from IAS to their combinatorics class at Princeton taught by Noga and discuss mathematics. While Maryanthe studied the abstract mathematical field of model theory, Shay and Maryanthe soon noticed a connection between model theory and machine learning: the Littlestone dimension. “There is applied math, then pure math, and then model theory is way over there,” Shay elaborates. “If theoretical machine learning is between applied math and pure math, model theory is on the other extreme.” This surprising interdisciplinary connection led them to use a result from model theory: a concept class had finite Littlestone Dimension precisely when the concept class had finite threshold dimension (formally, the maximum number of threshold functions that could be embedded in the class). This meant that answering (*) negatively was enough to show that DP PAC learning was as hard as online learning.</p>
<p>The idea for showing a lower bound for improper DP learning of thresholds came from Ramsey Theory, a famous area in combinatorics. Ramsey theory guarantees the existence of structured subsets among large, but arbitrary, combinatorial objects. A toy example of Ramsey Theory is that in any graph on 6 or more nodes, there must be a group of 3 nodes which form either a clique or an independent set. In the case of DP learning thresholds, the learning algorithm \( A \) is the arbitrary (and unstructured) combinatorial object. Ramsey Theory guarantees that for any PAC learner \( A \), there exists some large subset \( \mathcal{X}’ \) of the domain \(\mathcal{X}=[0, 1]\) on which \( A \) is close to behaving “normally”. The next step is to show that behaving normally contradicts differential privacy. We’ll dig more technically into this argument in the next couple of paragraphs. Note that we invent a couple of definitions for the sake of exposition (“proper-normal” and “improper-normal”) that don’t appear in <a href="https://arxiv.org/abs/1504.07553"><strong>[BNSV15]</strong></a> or <a href="https://arxiv.org/abs/1806.00949"><strong>[ALMM19]</strong></a>.</p>
<p>Let’s start by seeing how a simpler version of this argument from <a href="https://arxiv.org/abs/1504.07553"><strong>[BNSV15]</strong></a> works to show that proper DP algorithms cannot learn thresholds. Recall that in a proper algorithm, after seeing \( n \) labeled samples, \( A \) must output a threshold. To be a PAC learner, if exactly half of the \( n \) samples are labeled \( 1 \) (which we will call a <em>balanced</em> sample), \( A \) must output a threshold between the smallest and largest sample with constant probability. Indeed, otherwise the empirical error of \( A \) will be too large to even hope to generalize. This implies that for a balanced list of samples \( S = [(x_1, 0), \ldots ,(x_{n/2}, 0), (x_{n/2+1}, 1), \ldots ,(x_n, 1)] \) (ordered by \(x\)-value), there must exist an integer \( k \in [n] \) for which with probability \( \Omega(1/n) \), \(A \) outputs a threshold in between
\( x_k \) and \( x_{k+1} \). We’ll say that \( A \) is <em>\(k\)-proper-normal</em> on a set \( \mathcal{X}’ \) if for any balanced sample \( S \) of \( n \) points in \( \mathcal{X}’ \), \( A \) outputs a threshold in between the \(k\)th and \(k+1\)th ordered samples with probability \( \Omega(1/n) \). For example, the naive algorithm that always outputs a threshold between the 0-labeled samples and 1-labeled samples is \(n/2\)-proper-normal on the entire domain \([0, 1]\). Ramsey theory guarantees that there is an arbitrarily large subset of the domain \([0, 1]\) on which \(A \) is \(k\)-proper-normal for some \(k\). (See Figure 2.)</p>
<p><img src="/images/RamseyTheorem.png" alt="Figure 2: Ramsey's Theorem" title="Figure 2: Ramsey's Theorem" />
<strong>Figure 2</strong>: Ramsey’s Theorem guarantees a large subset \( \mathcal{X}’ \) on which \( A \) is \( k \)-proper-normal for some \( k \).</p>
<p>The second part of the argument shows that being \( k \)-proper-normal on a large set is in direct conflict with differential privacy. We show this argument in Figure 3 by constructing a set of \( n \) samples \( S^* \) on which \( A \) must output a threshold in many distinct regions with substantial probability.</p>
<p><img src="/images/DP_Construction.png" alt="Figure 3: A Hard Distribution" title="Figure 3: A Hard Distribution" /></p>
<p><strong>Figure 3:</strong> A distribution showing the conflict between \(A \) being DP and \(k\)-proper normal. By DP, the behaviour of \(A\) on \(S^* \) must be similar to its behaviour on \(S_i\) for \(i = 1 \ldots \Omega(n).\) Namely, since \(S^* \) and \( S_i \) differ by at most two points, \( A(S^*) \) must output a threshold in \( I_i \) with probability \( p \geq (q - 2\delta)e^{-(2\varepsilon)} \) for each \( i \), where \( q \) is a lower bound on the probability that \( A(S_i) \) outputs a threshold in \( I_i \). Because \( A \) is \(k\)-proper-normal, \( q > c/n \), so \( p = \Omega(1/n) \). This yields a contradiction for \( m >1/p \) because \( A \) cannot simultaneously output a threshold in two of the intervals \( I_i \).</p>
<p>For the case of improper algorithms, Shay and his coauthors considered an alternative notion of normality, which we will term <em>improper-normal</em>. Recall that in this case the output \( A(S) \) of the learner on a sample \( S \) is <em>any function</em> from \( [0, 1] \) to \( \{0, 1\} \) and not necessarily a threshold. We’ll say that \( A \) is \(k\)-improper-normal on a set \( \mathcal{X}’ \) if for any balanced sample \( S = [(x_1, 0), \ldots, (x_{n/2}, 0), (x_{n/2 +1}, 1), \ldots, (x_n, 1)] \) of \( n \) points in \( \mathcal{X}’ \), we have \( \Pr[A(S)(w) = 1] - \Pr[A(S)(v) = 1] = \Omega(1/n) \) for any \( w \in (x_{k - 1}, x_k) \) and \( v \in (x_k, x_{k + 1}) \) in \( \mathcal{X}’ \setminus \{x_1, \ldots, x_n\} \). To PAC learn, \( A \) must be \(k\)-improper-normal for some \(k\) on any set of \(n + 1\) points. Applying the same Ramsey Theorem to a graph with colored hyperedges of size \(n + 1\) shows that there must exist an arbitrarily large set \( \mathcal{X}’ \) on which \( A \) is \(k\)-improper-normal for some \( k \). A similar (but more nuanced) argument as before shows that a learner cannot be simultaneously private and \(k\)-improper-normal on some distribution over \( \mathcal{X}’ \).</p>
<p>For the last piece of the puzzle, showing the upper bound converse, Mark Bun, an expert in differential privacy at Princeton at the time, joined in. Beginning in the fall of 2019, Mark, Shay, and Roi worked out an upper bound that showed that any class with finite Littlestone dimension could be learned privately. Their technique introduced a new notion of stability, called <em>global stability</em>, which is a property of algorithms that frequently output the same hypothesis. Given a globally stable PAC learner \( A \), to obtain a DP PAC learner, one can run \( A \) many times and produce a histogram of the output hypotheses, add noise to this histogram for the sake of privacy, and then select the most frequent output hypothesis. The construction for the globally stable learner uses the Standard Optimal Algorithm for online learning as a black box - though the reduction is very computationally intensive and results in a sample complexity depending exponentially on the Littlestone Dimension. This reduction was improved in <a href="https://arxiv.org/abs/2012.03893"><strong>[GGKM20]</strong></a>, where the authors gave a reduction requiring only polynomially many samples in the Littlestone dimension.</p>
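<p>The histogram-based reduction can be sketched as follows (a simplification of ours: we omit the count threshold that the actual stable-histogram mechanism needs for its \( (\varepsilon, \delta) \) guarantee, and assume hypotheses are hashable):</p>

```python
import random
from collections import Counter

def laplace_noise(scale, rng):
    """Laplace(scale) noise as the difference of two exponentials."""
    return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

def select_from_stable_learner(learner, chunks, eps, rng=None):
    """Run a globally stable learner on disjoint chunks of the data,
    histogram the hypotheses it outputs, add Laplace noise to each
    count, and release the bin with the largest noisy count."""
    rng = rng or random.Random()
    counts = Counter(learner(chunk) for chunk in chunks)
    noisy = {hyp: c + laplace_noise(2.0 / eps, rng) for hyp, c in counts.items()}
    return max(noisy, key=noisy.get)

# A toy "globally stable" learner: it outputs the same hypothesis name
# on 90 of 100 chunks, so one histogram bin dominates and survives the noise.
chunks = [[i] for i in range(100)]
learner = lambda chunk: "h_star" if chunk[0] % 10 else "h_rare"
winner = select_from_stable_learner(learner, chunks, eps=1.0,
                                    rng=random.Random(0))
```

<p>Global stability is what makes this work: because one hypothesis appears with a large count, the noisy maximum is almost surely that hypothesis, while each individual's data affects only one chunk and hence one count by at most one.</p>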
<h2 id="outlook">Outlook</h2>
<p>Shay mentions that these results are only a first step towards understanding differentially private PAC learning. This work establishes a deep connection between online learning and DP learning, qualitatively showing that these two problems have similar underlying complexity. Recent works <a href="https://arxiv.org/abs/1905.11311"><strong>[GHM19]</strong></a> have gone a step further and established polynomial time reductions from DP learning to online learning under certain conditions. At the same time, Bun <a href="https://arxiv.org/abs/2007.05665"><strong>[B20]</strong></a> demonstrates a computational gap between the two problems: he exhibits a concept class which is DP PAC learnable in polynomial time, but which no algorithm can learn online in polynomial time and sample complexity.</p>
<p>An interesting question here is whether a polynomial time online learning algorithm implies a polynomial time DP learning algorithm. With regards to sample complexity, tighter quantitative bounds relating the sample complexity of DP learning to the threshold dimension and the Littlestone dimension, along with constructive reductions from DP learning to online learning, are wide open. An interesting conjecture, also highlighted in the talk, is that DP PAC learning and PAC learning are actually equivalent up to a \(\log^*\) factor of the Littlestone dimension. Solving this would mean that, for most natural function classes, one need not pay a very high price for private learning as compared to PAC learning.</p>
<p>From a more practical perspective, studying such qualitative equivalences can lay the groundwork allowing one to use the vast existing knowledge in the field of online learning to design better algorithms for DP learning, and vice versa. Despite its abstractness, Shay believes this result still has significance for engineers: “If they want to do something differentially privately, if they already have a good online learning algorithm for this, maybe they can modify it,” Shay says. “It gives some kind of inspiration.”</p>
<p>From a broader perspective, Shay believes that discovering clean, beautiful mathematical models that are more realistic than the PAC learning model and understanding the price one pays for privacy in those models are important directions for future research. One model, highlighted in the tutorial by Shay is that of <em>Universal Learning</em>, introduced in a recent work <a href="https://arxiv.org/abs/2011.04483"><strong>[BHMVY21]</strong></a> with Olivier Bousquet, Steve Hanneke, Ramon van Handel and Amir Yehudayoff. This model considers a learning task with a fixed distribution of samples and studies the hardness of the problem as the number of samples \( n \to \infty \). This setup better captures the practical aspects of modern machine learning and overcomes the limitations of the PAC model, which studies worst-case distributions for each sample size.</p>
<p>And how should one go about identifying such better mathematical models? “Pick the simplest problem that you don’t know how to solve and see where that leads you”, says Shay. One should begin by trying to understand the deficiencies of existing learning models, by identifying simple examples which go beyond these existing models. For example, Livni and Moran <a href="https://arxiv.org/abs/2006.13508"><strong>[LM20]</strong></a> exhibit the limitations of the PAC-Bayes framework through a simple 1D linear classification problem. Fixing such limitations can often lead one to discover better learning models.</p>
<hr />
<p><em>Thanks to Gautam Kamath, Shay Moran, and Keziah Naggita for helpful conversations and comments.</em></p>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>For a more complete background on the progress in DP PAC learning, we refer the reader to the excellent survey blog post <a href="https://differentialprivacy.org/private-pac/">here</a>. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>At the time, Shay was additionally affiliated with Princeton University and Google Brain. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>
Kush Bhatia, Margalit Glasgow. Mon, 26 Apr 2021 12:00:00 -0400
https://differentialprivacy.org/alt-highlights/
What is δ, and what δifference does it make?<p>There are many variants, or flavours, of differential privacy (DP), some weaker than others: often, a given variant comes with its own guarantees and “conversion theorems” to the others. As an example, “pure” DP has a single parameter \(\varepsilon\), and corresponds to a very stringent notion of DP:</p>
<blockquote>
<p>An algorithm \(M\) is \(\varepsilon\)-DP if, for all neighbouring inputs \(D,D'\) and all measurable \(S\), \( \Pr[ M(D) \in S ] \leq e^\varepsilon\Pr[ M(D’) \in S ] \).</p>
</blockquote>
<p>By relaxing this a little, one obtains the standard definition of approximate DP, a.k.a. \((\varepsilon,\delta)\)-DP:</p>
<blockquote>
<p>An algorithm \(M\) is \((\varepsilon,\delta)\)-DP if, for all neighbouring inputs \(D,D'\) and all measurable \(S\), \( \Pr[ M(D) \in S ] \leq e^\varepsilon\Pr[ M(D’) \in S ]+\delta \).</p>
</blockquote>
<p>This definition is very useful, as in many settings achieving the stronger \(\varepsilon\)-DP guarantee (i.e., \(\delta=0\)) is impossible, or comes at a very high utility cost. But how to interpret it? The above definition, on its face, doesn’t preclude what one may call “<em>catastrophic failures of privacy</em> 💥:” most of the time, things are great, but with some small probability \(\delta\) all hell breaks loose. For instance, the following algorithm is \((\varepsilon,\delta)\)-DP:</p>
<ul>
<li>Get a sensitive database \(D\) of \(n\) records</li>
<li>Select uniformly at random a fraction \(\delta\) of the database (\(\delta n\) records)</li>
<li>Output that subset of records in the clear 💥</li>
</ul>
<p>(actually, this is even \((0,\delta)\)-DP!). This sounds preposterous, and is obviously something one would want to avoid in practice (lest one face very angry customers or constituents). It motivates one of the rules of thumb for picking \(\delta\): choose it small enough (or even “cryptographically small”), typically \(\delta \ll 1/n\), so that the records are safe (in expectation, \(\delta n \ll 1\) records are disclosed).</p>
<p>So: good privacy most of the time, but with probability \(\delta\), all bets are off.</p>
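<p>The “catastrophic” mechanism above can be sketched in a few lines (the record format and parameters are arbitrary, chosen only to make the failure mode concrete):</p>

```python
import random

def catastrophic_mechanism(records, delta, rng):
    """Release a uniformly random delta-fraction of the records in the
    clear. This satisfies (0, delta)-DP, yet completely exposes the
    selected individuals."""
    k = int(delta * len(records))
    return rng.sample(records, k)
```

<p>Each individual record is published with probability exactly \(\delta\), which is why the rule of thumb \(\delta \ll 1/n\) keeps the expected number of exposed records below one.</p>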
<p>However, those catastrophic failures of privacy, while technically allowed by the definition of \((\varepsilon,\delta)\)-DP, <strong>are not something that can really happen with the DP algorithms and techniques used both in practice and in theoretical work.</strong> Before explaining why, let’s look at the kind of desirable behaviour one would expect: a <em>“smooth, manageable tradeoff of privacy parameters.”</em> For that discussion, let’s introduce the <em>privacy loss random variable</em>: given an algorithm \(M\) and two neighbouring inputs \(D,D'\), let \(f(y)\) be defined as
\[
f(y) = \log\frac{\Pr[M(D)=y]}{\Pr[M(D’)=y]}
\]
for every possible output \(y\in\Omega\). Now, define the random variable \(Z := f(M(D))\) (implicitly, \(Z\) depends on \(D,D',M\)). This random variable quantifies how much observing the output of the algorithm \(M\) helps in distinguishing between \(D\) and \(D'\).</p>
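<p>As a concrete example (randomized response, chosen here because its privacy loss has a simple closed form; it is not discussed in the text above), here is the distribution of \(Z\) for the \(\varepsilon\)-DP randomized-response mechanism on neighbouring bits \(b=0\) and \(b'=1\):</p>

```python
import math

def randomized_response_plrv(eps):
    """Distribution of the privacy loss Z for eps-randomized response,
    which outputs the true bit with probability e^eps / (1 + e^eps).
    Returns a dict mapping each value of Z to its probability under M(0)."""
    p = math.exp(eps) / (1.0 + math.exp(eps))  # P[M(0) = 0] = P[M(1) = 1]
    # Output 0: Z = log(p / (1 - p)) = eps; output 1: Z = log((1 - p) / p) = -eps.
    return {eps: p, -eps: 1.0 - p}
```

<p>The support of \(Z\) here is exactly \(\{-\varepsilon, +\varepsilon\}\): observing a single output shifts the log-odds between \(D\) and \(D'\) by at most \(\varepsilon\), with no tails at all.</p>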
<p>Now, going a little bit fast, you can check that saying that \(M\) is \(\varepsilon\)-DP corresponds to the guarantee “<em>\(\Pr[Z > \varepsilon] = 0\) for all neighbouring inputs \(D,D'\).</em>”
Similarly, \(M\) being \((\varepsilon,\delta)\)-DP is the guarantee \(\Pr[Z > \varepsilon] \leq \delta\).\({}^{(\dagger)}\) For instance, the “catastrophic failure of privacy” corresponds to the scenario below, which depicts a possible distribution for \(Z\): \(Z\leq \varepsilon\) with probability \(1-\delta\), but then with probability \(\delta\) we have \(Z\gg 1\).</p>
<p><img src="/images/flavours-delta-fig1.png" width="600" alt="The type of (bad) distribution of Z corresponding to 'our catastrophic failure of privacy'" style="margin:auto;display: block;" /></p>
<p>What we would like is something smoother, where even when \(Z>\varepsilon\), the loss still remains reasonable and doesn’t immediately become large. That is, we want nicely behaved tails, ideally something like this:</p>
<p><img src="/images/flavours-delta-fig2.png" width="600" alt="A distribution for Z with nice tails, leading to smooth tradeoffs between ε and δ" style="margin:auto;display: block;" /></p>
<p>For instance, if we had a bound on \(\mathbb{E}[|Z|]\), we could use Markov’s inequality to get, well, <em>something</em>. For instance, imagine we had \(\mathbb{E}[|Z|]\leq \varepsilon\delta\): then
\[
\Pr[ |Z| > \varepsilon ] \leq \frac{\mathbb{E}[|Z|]}{\varepsilon }\leq \delta
\]
<em>(great! We have \((\varepsilon,\delta)\)-DP)</em>; but also \(\Pr[ |Z| > 10\varepsilon ] \leq \frac{\delta}{10}\). Privacy violations do not blow up out of proportion immediately: we can trade \(\varepsilon\) for \(\delta\). That seems like the type of behaviour we would like our algorithms to exhibit.</p>
<p><img src="/images/flavours-delta-fig3.png" width="600" alt="The type of privacy guarantees a Markov-type tail bound would give" style="margin:auto;display: block;" /></p>
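<p>To see such smooth tails concretely, one can simulate the privacy loss of the Gaussian mechanism (taking neighbours with \(f(D)=0\) and \(f(D')=\Delta\) and noise \(N(0,\sigma^2)\), the loss works out to \(Z \sim N(\Delta^2/2\sigma^2,\; \Delta^2/\sigma^2)\)); the parameters below are arbitrary:</p>

```python
import random

def gaussian_plrv_samples(sens, sigma, n, rng):
    """Monte Carlo samples of the privacy loss Z for the Gaussian
    mechanism M(D) = f(D) + N(0, sigma^2), with f(D) = 0 and
    f(D') = sens. For y ~ M(D), Z = (sens^2 - 2*sens*y) / (2*sigma^2)."""
    return [
        (sens**2 - 2.0 * sens * rng.gauss(0.0, sigma)) / (2.0 * sigma**2)
        for _ in range(n)
    ]

def tail_fraction(samples, threshold):
    """Empirical estimate of Pr[Z > threshold]."""
    return sum(z > threshold for z in samples) / len(samples)
```

<p>Raising the threshold \(\varepsilon\) shrinks the empirical \(\delta\) at a Gaussian rate: exactly the kind of well-behaved tail depicted above.</p>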
<p>But why stop at Markov’s inequality then, which gives some nice but still weak tail bounds? Why not ask for <em>stronger</em>: Chebyshev’s inequality? Subexponential tail bounds? Hell, <em>subgaussian</em> tail bounds? This is, basically, what some stronger notions of differential privacy than approximate DP give.</p>
<ul>
<li>
<p><strong>Rényi DP</strong> <a href="https://arxiv.org/abs/1702.07476" title="Ilya Mironov. Renyi Differential Privacy. CSF 2017"><strong>[Mironov17]</strong></a>, for instance, is a guarantee on the moment-generating function (MGF) of the privacy random variable \(Z\): it has two parameters, \(\alpha>1\) and \(\tau\), and requires that \(\mathbb{E}[e^{(\alpha-1)Z}] \leq e^{(\alpha-1)\tau}\) for all neighbouring \(D,D'\). In turn, by applying for instance Markov’s inequality to the MGF of \(Z\), we can control the tail bounds, and get a nice, smooth tradeoff in terms of \((\varepsilon,\delta)\)-DP.</p>
</li>
<li>
<p><strong>Concentrated DP</strong> (CDP) <a href="https://arxiv.org/abs/1605.02065" title="Mark Bun and Thomas Steinke. Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds. TCC 2016"><strong>[BS16]</strong></a> is an even stronger requirement, which roughly speaking requires the algorithm to be Rényi DP <em>simultaneously</em> for all \(1< \alpha \leq \infty\). More simply, this is “morally” a requirement on the MGF of \(Z\) which asks it to be subgaussian.</p>
</li>
</ul>
<p>The above two examples are not just fun but weird variants of DP: they actually capture the behaviour of many well-known differentially private algorithms, and in particular that of the Gaussian mechanism. While the guarantees they provide are less easy to state and interpret than \(\varepsilon\)-DP or \((\varepsilon,\delta)\)-DP, they are incredibly useful to analyze those algorithms, and enjoy very nice composition properties… and, of course, lead to that smooth tradeoff between \(\varepsilon\) and \(\delta\) for \((\varepsilon,\delta)\)-DP.</p>
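<p>The Markov-on-the-MGF conversion mentioned above is simple enough to write down; this is a sketch of the standard Rényi-DP-to-approximate-DP conversion (the parameter values in the note below are arbitrary):</p>

```python
import math

def rdp_to_approx_dp(alpha, tau, eps):
    """Convert an (alpha, tau)-Renyi DP guarantee into (eps, delta)-DP:
    by Markov's inequality applied to the MGF of Z,
        Pr[Z > eps] <= E[e^{(alpha-1) Z}] / e^{(alpha-1) eps}
                    <= e^{-(alpha-1)(eps - tau)}."""
    return math.exp(-(alpha - 1.0) * (eps - tau))
```

<p>For instance, with \(\alpha = 10\) and \(\tau = 0.1\), moving \(\varepsilon\) from \(1\) to \(2\) shrinks \(\delta\) by a factor of \(e^{9}\): the smooth tradeoff between \(\varepsilon\) and \(\delta\) discussed above.</p>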
<p><strong>To summarize:</strong></p>
<ul>
<li>\(\varepsilon\)-DP gives great guarantees, but is a very stringent requirement. Corresponds to the privacy loss random variable supported on \([-\varepsilon,\varepsilon]\) (no tails!)</li>
<li>\((\varepsilon,\delta)\)-DP gives guarantees easy to parse, but on its face allows for very bad behaviours. Corresponds to the privacy loss random variable in \([-\varepsilon,\varepsilon]\) with probability \(1-\delta\) (but outside, all bets are off!)</li>
<li>Rényi DP and Concentrated DP correspond to something in between, controlling the tails of the privacy loss random variable by a guarantee on its MGF. A bit harder to interpret, but they capture the behaviour of many DP building blocks and can be converted to \((\varepsilon,\delta)\)-DP (with nice trade-offs between \(\varepsilon\) and \(\delta\)).</li>
</ul>
<hr />
<p>\({}^{(\dagger)}\) The astute reader may notice that this is not <em>quite</em> true. Namely, the guarantee \(\Pr[Z > \varepsilon] \leq \delta\) on the privacy loss random variable (PLRV) does imply \((\varepsilon,\delta)\)-differential privacy, but the converse does not hold. See, for instance, Lemma 9 of <a href="https://arxiv.org/abs/2004.00010" title="Clément L. Canonne, Gautam Kamath, Thomas Steinke. The Discrete Gaussian for Differential Privacy. NeurIPS 2020"><strong>[CKS20]</strong></a> for an exact characterization of \((\varepsilon,\delta)\)-DP in terms of the PLRV.</p>
Clément Canonne. Thu, 11 Mar 2021 21:00:00 -0400
https://differentialprivacy.org/flavoursofdelta/
Conference Digest - TPDP 2020<p><a href="https://tpdp.journalprivacyconfidentiality.org/2020/">TPDP 2020</a> is a workshop focused on differential privacy. As such, it’s a great place to learn about recent developments in the DP research community.
It will be held on 13 November and is co-located with <a href="https://www.sigsac.org/ccs/CCS2020/">CCS</a>, but, of course, it’s virtual this year. <a href="https://www.sigsac.org/ccs/CCS2020/registration.html">Registration is only US$35 if you register by Friday, 30 October.</a> Check out the 8 excellent talks and 71 posters below – wow, the workshop has grown!</p>
<p>Please let us know if there are any errors or omissions.</p>
<h2 id="invited-talks">Invited Talks</h2>
<ul>
<li>
<p>OpenDP: A Community Effort to Build Trustworthy Differential Privacy Software.<br />
<a href="https://salil.seas.harvard.edu/">Salil Vadhan</a></p>
</li>
<li>
<p>Implementation with Base-2 DP or: How I learned to stop worrying and love floating point.<br />
<a href="https://cilvento.org/">Christina Ilvento</a></p>
</li>
</ul>
<h2 id="contributed-talks">Contributed Talks</h2>
<ul>
<li>
<p><a href="https://arxiv.org/abs/2009.09052">Private Reinforcement Learning with PAC and Regret Guarantees</a><br />
Giuseppe Vietri, Borja Balle, Akshay Krishnamurthy, Z. Steven Wu</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2006.07709">Auditing Differentially Private Machine Learning: How Private is Private SGD?</a><br />
Matthew Jagielski, Jonathan Ullman, Alina Oprea</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2006.06783">Characterizing Private Clipped Gradient Descent on Convex Generalized Linear Problems</a><br />
Shuang Song, Om Thakkar, Abhradeep Thakurta</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2004.10941">Private Query Release Assisted by Public Data</a><br />
Raef Bassily, Albert Cheu, Shay Moran, Aleksandar Nikolov, Jonathan Ullman, Z. Steven Wu</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2003.00563">An Equivalence Between Private Classification and Online Prediction</a><br />
Mark Bun, Roi Livni, Shay Moran</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2002.09745">Differentially Private Set Union</a><br />
Sivakanth Gopi, Pankaj Gulhane, Janardhan Kulkarni, Judy Hanwen Shen, Milad Shokouhi, Sergey Yekhanin</p>
</li>
</ul>
<h2 id="posters">Posters</h2>
<ul>
<li>
<p><a href="https://arxiv.org/abs/2004.00010">The Discrete Gaussian for Differential Privacy</a><br />
Clément Canonne, Gautam Kamath, Thomas Steinke</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2002.09465">Locally Private Hypothesis Selection</a><br />
Sivakanth Gopi, Gautam Kamath, Janardhan Kulkarni, Aleksandar Nikolov, Z. Steven Wu, Huanyu Zhang</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2002.05839">LinkedIn’s Audience Engagements API: A Privacy Preserving Data Analytics System at Scale</a><br />
Ryan Rogers, Subbu Subramaniam, Sean Peng, David Durfee, Seunghyun Lee, Santosh Kumar Kancha, Shraddha Sahay, Parvez Ahammad</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2005.14717">Differentially Private Decomposable Submodular Maximization</a><br />
Anamay Chaturvedi, Huy Nguyen, Lydia Zakynthinou</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2007.11934">Private Post-GAN Boosting</a><br />
Marcel Neunhoeffer, Z. Steven Wu, Cynthia Dwork</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2002.01100">Efficient, Noise-Tolerant, and Private Learning via Boosting</a><br />
Marco Carmosino, Mark Bun, Jessica Sorrell</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2003.04509">Closure Properties for Private Classification and Online Prediction</a><br />
Noga Alon, Amos Beimel, Shay Moran, Uri Stemmer</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2006.12018">Overlook: Differentially Private Exploratory Visualization for Big Data</a><br />
Pratiksha Thaker, Mihai Budiu, Parikshit Gopalan, Udi Wieder, Matei Zaharia</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2006.01980">On the Equivalence between Online and Private Learnability beyond Binary Classification</a><br />
Young Hun Jung, Baekjin Kim and Ambuj Tewari</p>
</li>
<li>
<p><a href="https://dettanym.github.io/files/tpdp20_workshop_paper.pdf">Cache Me If You Can: Accuracy-Aware Inference Engine for Differentially Private Data Exploration</a><br />
Miti Mazmudar, Thomas Humphries, Matthew Rafuse, Xi He</p>
</li>
<li>
<p><a href="https://drops.dagstuhl.de/opus/volltexte/2020/12026/">Bounded Leakage Differential Privacy</a><br />
Katrina Ligett, Charlotte Peale, Omer Reingold</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/1909.06322">A Knowledge Transfer Framework for Differentially Private Sparse Learning</a><br />
Lingxiao Wang, Quanquan Gu</p>
</li>
<li>
<p>Consistent Integer, Non-Negative, Hierarchical Histograms without Integer Programming<br />
Cynthia Dwork, Christina Ilvento</p>
</li>
<li>
<p><a href="https://www.microsoft.com/en-us/research/uploads/prod/2020/03/intrinsic_privacy_tpdp.pdf">An Empirical Study on the Intrinsic Privacy of Stochastic Gradient Descent</a><br />
Stephanie Hyland, Shruti Tople</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2001.03618">Encode, Shuffle, Analyze Privacy Revisited: Formalizations and Empirical Evaluation</a><br />
Úlfar Erlingsson, Vitaly Feldman, Ilya Mironov, Ananth Raghunathan, Shuang Song, Kunal Talwar, Abhradeep Thakurta</p>
</li>
<li>
<p>Improving Sparse Vector Technique with Renyi Differential Privacy<br />
Yuqing Zhu and Yu-Xiang Wang</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2007.11707">Breaking the Communication-Privacy-Accuracy Trilemma</a><br />
Wei-Ning Chen, Peter Kairouz, Ayfer Özgür</p>
</li>
<li>
<p>Budget Sharing for Multi-Analyst Differential Privacy<br />
David Pujol, Yikai Wu, Brandon Fain, Ashwin Machanavajjhala</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/1908.07643">AdaCliP: Adaptive Clipping for Private SGD</a><br />
Venkatadheeraj Pichapati, Ananda Theertha Suresh, Felix X. Yu, Sashank J. Reddi, Sanjiv Kumar</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2007.01181">Private Optimization Without Constraint Violation</a><br />
Andrés Muñoz Medina, Umar Syed, Sergei Vassilvitskii, Ellen Vitercik</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/1912.06015">Efficient Per-Example Gradient Computations in Convolutional Neural Networks</a><br />
Gaspar Rochette, Andre Manoel, Eric Tramel</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2007.12674">Controlling Privacy Loss in Survey Sampling</a><br />
Audra McMillan, Mark Bun, Marco Gaboardi, Joerg Drechsler</p>
</li>
<li>
<p>Privacy-Preserving Community Detection under the Stochastic Block Model<br />
Jonathan Hehir, Aleksandra Slavkovic, Xiaoyue Niu</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2006.10129">Smoothed Analysis of Differentially Private and Online Learning</a><br />
Nika Haghtalab, Tim Roughgarden, Abhishek Shetty</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2002.09464">Private Mean Estimation for Heavy-Tailed Distributions</a><br />
Gautam Kamath, Vikrant Singhal, Jonathan Ullman</p>
</li>
<li>
<p><a href="https://link.springer.com/chapter/10.1007%2F978-3-030-57521-2_23">Private Posterior Inference Consistent with Public Information: a Case Study in Small Area Estimation from Synthetic Census Data</a><br />
Jeremy Seeman, Aleksandra Slavkovic, Matthew Reimherr</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2001.09122">Reasoning About Generalization via Conditional Mutual Information</a><br />
Thomas Steinke, Lydia Zakynthinou</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2006.15429">Understanding Gradient Clipping in Private SGD: A Geometric Perspective</a><br />
Xiangyi Chen, Z. Steven Wu, Mingyi Hong</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2007.06605">Privacy Amplification via Random Check-Ins</a><br />
Borja Balle, Peter Kairouz, Brendan McMahan, Om Thakkar, Abhradeep Thakurta</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2010.12603">Permute-and-flip: a new mechanism for differentially-private selection</a><br />
Ryan McKenna, Daniel Sheldon</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2007.02923">Descent-to-Delete: Gradient-Based Methods for Machine Unlearning</a><br />
Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2008.06529">A Better Bound Gives a Hundred Rounds: Enhanced Privacy Guarantees via f-Divergences</a><br />
Shahab Asoodeh, Jiachun Liao, Flavio Calmon, Oliver Kosut, Lalitha Sankar</p>
</li>
<li>
<p><a href="https://sites.tufts.edu/vrdi/files/2020/07/Slides-DP-Bhushan-Suwal-JN-Matthews-et-al.pdf">Census TopDown and the Redistricting Use Case</a><br />
Aloni Cohen, Moon Duchin, JN Matthews, Bhushan Suwal, Peter Wayner</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/1911.04014">Interaction is Necessary for Distributed Learning with Privacy or Communication Constraints</a><br />
Yuval Dagan, Vitaly Feldman</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2005.10783">Fisher information under local differential privacy</a><br />
Leighton Barnes, Wei-Ning Chen, Ayfer Ozgur</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2004.06830">Differentially Private Assouad, Fano, and Le Cam</a><br />
Jayadev Acharya, Ziteng Sun, Huanyu Zhang</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2007.13660">Learning discrete distributions: user vs item-level privacy</a><br />
Yuhan Liu, Ananda Theertha Suresh, Felix Yu, Sanjiv Kumar, Michael Riley</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/1910.13659">Efficient Privacy-Preserving Stochastic Nonconvex Optimization</a><br />
Lingxiao Wang, Bargav Jayaraman, David Evans, Quanquan Gu</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2007.03813">Bypassing the Ambient Dimension: Private SGD with Gradient Subspace Identification</a><br />
Yingxue Zhou, Zhiwei Steven Wu, Arindam Banerjee</p>
</li>
<li>
<p>Differentially private partition selection<br />
Damien Desfontaines, Bryant Gipson, Chinmoy Mandayam, James Voss</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2008.08007">Differentially Private Clustering: Tight Approximation Ratios</a><br />
Badih Ghazi, Ravi Kumar, Pasin Manurangsi</p>
</li>
<li>
<p>Let’s not make a fuzz about it<br />
Elisabet Lobo Vesga, Alejandro Russo, Marco Gaboardi</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2002.12321">PAPRIKA: Private Online False Discovery Rate Control</a><br />
Wanrong Zhang, Gautam Kamath, Rachel Cummings</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2009.13689">Oblivious Sampling Algorithms for Private Data Analysis</a><br />
Sajin Sasy, Olga Ohrimenko</p>
</li>
<li>
<p>SOGDB-epsilon: Secure Outsourced Growing Database with Differentially Private Record Update<br />
Chenghong Wang, Kartik Nayak, Ashwin Machanavajjhala</p>
</li>
<li>
<p><a href="https://invertibleworkshop.github.io/accepted_papers/pdfs/41.pdf">Differentially Private Normalizing Flows for Privacy-Preserving Density Estimation</a><br />
Chris Waites, Rachel Cummings</p>
</li>
<li>
<p><a href="https://cs.uwaterloo.ca/~hsivasub/pub/TPDP2020.pdf">Differentially Private Sublinear Average Degree Approximation</a><br />
Harry Sivasubramaniam, Haonan Li, Xi He</p>
</li>
<li>
<p><a href="https://chong-l.github.io/MAPL_TNC_FL_ICML_2020.pdf">Revisiting Model-Agnostic Private Learning: Faster Rates and Active Learning</a><br />
Chong Liu, Yuqing Zhu, Kamalika Chaudhuri, Yu-Xiang Wang</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2006.06618">CoinPress: Practical Private Mean and Covariance Estimation</a><br />
Sourav Biswas, Yihe Dong, Gautam Kamath, Jonathan Ullman</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2004.09481">Connecting Robust Shuffle Privacy and Pan-Privacy</a><br />
Victor Balcer, Albert Cheu, Matthew Joseph, Jieming Mao</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2006.11204">Differentially Private Variational Autoencoders with Term-wise Gradient Aggregation</a><br />
Tsubasa Takahashi, Shun Takagi, Hajime Ono, Tatsuya Komatsu</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2006.07490">Understanding Unintended Memorization in Federated Learning</a><br />
Om Thakkar, Swaroop Ramaswamy, Rajiv Mathews, Francoise Beaufays</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2005.10630">Near Instance-Optimality in Differential Privacy</a><br />
Hilal Asi, John Duchi</p>
</li>
<li>
<p><a href="https://drive.google.com/file/d/1okHAkjNENiS2WfSKdkUo8B29yE8-Qfof/view">Implementing differentially private integer partitions via the exponential mechanism</a> and <a href="https://drive.google.com/file/d/1OytgB24d1n-xPIWrrKCsVQQdS7rV3tjn/view">Implementing Sparse Vector</a><br />
Christina Ilvento</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2007.05157">Differentially Private Simple Linear Regression</a><br />
Audra McMillan, Daniel Alabi, Jayshree Sarathy, Adam Smith, Salil Vadhan</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2007.05453">New Oracle-Efficient Algorithms for Private Synthetic Data Release</a><br />
Giuseppe Vietri, Grace Tian, Mark Bun, Thomas Steinke, Z. Steven Wu</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2006.07749">General-Purpose Differentially-Private Confidence Intervals</a><br />
Cecilia Ferrando, Shufan Wang, Daniel Sheldon</p>
</li>
<li>
<p>Central Limit Theorem and Uncertainty Principles for Differentially Private Query Answering<br />
Jinshuo Dong, Linjun Zhang, Weijie Su</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2006.13501">Private Stochastic Non-Convex Optimization: Adaptive Algorithms and Tighter Generalization Bounds</a><br />
Yingxue Zhou, Xiangyi Chen, Mingyi Hong, Z. Steven Wu, Arindam Banerjee</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/1905.10335">Minimax Rates of Estimating Approximate Differential Privacy</a><br />
Xiyang Liu, Sewoong Oh</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2004.07740">Really Useful Synthetic Data – A Framework to Evaluate the Quality of Differentially Private Synthetic Data</a><br />
Christian Arnold, Marcel Neunhoeffer</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/1911.10541">PAC learning with stable and private predictions</a><br />
Yuval Dagan, Vitaly Feldman</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2004.04656">Computing Local Sensitivities of Counting Queries with Joins</a><br />
Yuchao Tao, Xi He, Ashwin Machanavajjhala, Sudeepa Roy</p>
</li>
<li>
<p>Efficient Reductions for Differentially Private Multi-objective Regression<br />
Julius Adebayo, Daniel Alabi</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2010.06667">The Pitfalls of Differentially Private Prediction in Healthcare</a><br />
Vinith Suriyakumar, Nicolas Papernot, Anna Goldenberg, Marzyeh Ghassemi</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2007.14191">Tempered Sigmoid Activations for Deep Learning with Differential Privacy</a><br />
Nicolas Papernot, Abhradeep Thakurta, Shuang Song, Steve Chien, Úlfar Erlingsson</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2005.10881">Revisiting Membership Inference Under Realistic Assumptions</a><br />
Bargav Jayaraman, Lingxiao Wang, David Evans, Quanquan Gu</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2009.04013">Attribute Privacy: Framework and Mechanisms</a><br />
Wanrong Zhang, Olga Ohrimenko, Rachel Cummings</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2010.10664">DuetSGX: Differential Privacy with Secure Hardware</a><br />
Phillip Nguyen, Alex Silence, David Darais, Joseph Near</p>
</li>
<li>
<p><a href="https://pdfs.semanticscholar.org/4319/65b3c5a47cf8bfd30f1c30cd044382e98d68.pdf">A Programming Framework for OpenDP</a><br />
Marco Gaboardi, Michael Hay, Salil Vadhan</p>
</li>
<li>
<p><a href="https://arxiv.org/abs/2006.09352">A One-Pass Private Sketch for Most Machine Learning Tasks</a><br />
Benjamin Coleman, Anshumali Shrivastava</p>
</li>
<li>
<p>Model-Agnostic Private Learning with Domain Adaptation<br />
Yuqing Zhu, Chong Liu, Yu-Xiang Wang</p>
</li>
</ul>
Thomas Steinke. Wed, 28 Oct 2020 00:01:00 +0000
https://differentialprivacy.org/tpdp2020/
Reconstruction Attacks in Practice<p>This is the second of two posts describing the theory and practice of reconstruction attacks. To read the first post, which covers the theoretical basis of such attacks, <a href="https://differentialprivacy.org/reconstruction-theory/">[click here]</a>.</p>
<hr />
<p>In the <a href="https://differentialprivacy.org/reconstruction-theory/">last post</a>, we discussed how an attacker can use noisy answers to questions about a database to reconstruct private information in the database. The reconstruction attack framework was:</p>
<ol>
<li>The attacker submits sufficiently random queries that link prior information (which the attacker already knows) to private data (which the attacker wants to learn).</li>
<li>The attacker receives noisy answers to these queries and writes them down as constraints for a linear program to solve for the private bits.</li>
<li>The attacker solves the linear program and rounds the result to recover most of the bits.</li>
</ol>
<p>Our last post discussed some of this attack’s nice theoretical guarantees, and this post matches that with real-world performance. More specifically, we’ll cover two successful applications of this attack against a piece of anonymizing SQL software called Diffix which, despite the name, is not differentially private.</p>
<h3 id="what-is-diffix">What is Diffix?</h3>
<p>Diffix is a system designed by the startup Aircloak for answering statistical queries over a private database. It is described by its creators as an “anonymizing SQL interface [that] sits in front of your data and enables you to conduct ad hoc analytics — fully privacy preserving and GDPR-compliant.”<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup> Aircloak’s approach is to develop targeted defenses for known vulnerabilities, but to otherwise privilege utility over protecting against unknown vulnerabilities. They combine this approach with a serious effort to actually find vulnerabilities in Diffix through periodic bug bounties that offer monetary prizes for participants who mount successful attacks. While this post is critical of the design of Diffix itself, we commend Aircloak for their genuine openness to scrutiny. Indeed, the attacks described in this post were carried out as a part of these bug bounty programs and led to the discovery of several vulnerabilities in the software that have since been addressed. The first attack we describe was carried out by Aloni Cohen and Kobbi Nissim in the first bug bounty program in late 2017 and early 2018. The second was run by Travis Dick, Matthew Joseph, and Zachary Schutzman in the second bug bounty program during the summer of 2020.</p>
<p>Before diving into the details of the attacks, we’ll first introduce the basic functionality of Diffix and how it purports to defend against vulnerabilities, including linear reconstruction attacks. The goal of Diffix is to answer SQL queries, such as:</p>
<pre><code class="language-SQL">SELECT COUNT(*) FROM loans
WHERE loanStatus = 'C'
AND clientId BETWEEN 2000 and 3000
</code></pre>
<p>on a database while preventing the disclosure of record-level data.
A challenge for a system like Diffix is to answer such counting queries while preventing an adversarial user—the attacker—from learning record-level information. As you might remember from the last post, such a system must not provide exact answers to arbitrary queries. Otherwise the attacker could mount a <em>differencing attack</em>. For example, an attacker who knows that Billy Joel’s <code class="language-plaintext highlighter-rouge">clientID</code> is 2744 could learn the status of the singer’s loan by comparing the answer to the previous query with the answer to:</p>
<pre><code class="language-SQL">SELECT COUNT(*) FROM loans
WHERE loanStatus = 'C'
AND clientId BETWEEN 2000 and 3000
AND clientId != 2744
</code></pre>
<p>An intuitive defense is to add noise to the answer—say, Gaussian noise sampled from \(N(0,10)\).
Now the difference \(\Delta\) in the responses to the two queries is a random variable sampled from \(N(1,20)\) or \(N(0,20)\) depending on whether Joel’s <code class="language-plaintext highlighter-rouge">loanStatus</code> is or isn’t <code class="language-plaintext highlighter-rouge">C</code>.
With just one sample, the distributions are hard to distinguish.</p>
<p>Still, this scheme is easily thwarted by <em>averaging attacks</em>.
If the noise is sampled anew each time a query is made, then repeatedly making the same pair of queries generates many independent samples from \(N(1,20)\) or \(N(0,20)\), and enough queries would make it possible to distinguish these distributions easily.</p>
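<p>A minimal Python simulation makes the averaging attack concrete (the count of 412 is hypothetical; the \(N(0,10)\) noise follows the example above, with 10 as the variance):</p>

```python
import random

# Simulate the averaging attack against fresh per-query noise.
# Each answer gets independent N(0, 10) noise (variance 10), so the
# difference of the two differencing-attack queries is N(delta, 20),
# where delta is the target's secret bit. One sample is inconclusive;
# the average of many repetitions is not.
rng = random.Random(0)
true_count, secret_bit = 412, 1        # hypothetical values
sd = 10 ** 0.5                         # variance 10 -> sd sqrt(10)

def noisy(answer):                     # fresh noise on every query
    return answer + rng.gauss(0, sd)

diffs = [noisy(true_count) - noisy(true_count - secret_bit)
         for _ in range(20000)]
estimate = sum(diffs) / len(diffs)     # concentrates around secret_bit
assert round(estimate) == secret_bit
```

With 20,000 repetitions the standard deviation of the average is about \(\sqrt{20/20000} \approx 0.03\), so the rounded estimate recovers the bit with overwhelming probability.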
<p>As before, there is an intuitive defense: use the same noise for repeated queries. This defense introduces its own new attacks by making many syntactically-distinct but semantically-equivalent queries. Those attacks in turn suggest new defenses which suggest new attacks, and so on. Diffix is, in a sense, the result of this hypothetical arms race.</p>
<p>From a technical perspective, Diffix consists of three components, which together are intended to thwart these attacks. First, Diffix only accepts a limited subset of SQL and will categorically reject any query that does not fit this subset. These restrictions—including tight restrictions on <code class="language-plaintext highlighter-rouge">JOIN</code>s and on the number of mathematical functions in a single expression—limit the ability of an adversary to use the full power of SQL to access the database. The second component is a collection of data-dependent ad-hoc methods to prevent leaking information about individuals or very small subsets of users, including suppressing answers to queries about small numbers of users and flattening outliers.</p>
<p>The final component is Diffix’s layered noise, which is the sum of two terms: a <em>data-dependent</em> term whose variance is constant<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup> and a <em>query-dependent</em> term whose variance depends on the complexity of the query. The data-dependent noise prevents naïve averaging attacks. It is a pseudorandom error whose seed depends on the individual data records that contribute to the query result. Semantically equivalent queries using different syntax will nonetheless share this error, so simply averaging the responses will not remove this noise.</p>
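<p>The idea behind this data-dependent noise can be sketched in a few lines of Python. This is a toy illustration, not Aircloak’s actual seeding scheme; the record format and noise magnitude are made up:</p>

```python
import hashlib
import random

# Toy "sticky" noise: seed a PRNG with a digest of the set of records
# contributing to the answer. Semantically equivalent queries touch the
# same records, so they receive identical noise and cannot be averaged
# against each other.
def sticky_noise(contributing_ids, sd=2.0):
    digest = hashlib.sha256(repr(sorted(contributing_ids)).encode()).digest()
    return random.Random(int.from_bytes(digest, "big")).gauss(0, sd)

def noisy_count(rows, predicate):
    matching = [rid for rid, rec in rows if predicate(rec)]
    return len(matching) + sticky_noise(matching)

rows = [(i, {"loanStatus": "C" if i % 3 else "D"}) for i in range(2000, 2100)]

# Two syntactically different but semantically equivalent predicates:
a1 = noisy_count(rows, lambda r: r["loanStatus"] == "C")
a2 = noisy_count(rows, lambda r: not r["loanStatus"] != "C")
assert a1 == a2   # same contributing records -> same noise
```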
<p>The query-dependent noise prevents naïve Dinur–Nissim-style reconstruction attacks. A noise term of magnitude \(\Omega(1)\) is generated deterministically for each condition in the <code class="language-plaintext highlighter-rouge">WHERE</code> or <code class="language-plaintext highlighter-rouge">HAVING</code> clause of the SQL query, and the terms are added together. A Dinur–Nissim query is a random subset of the dataset that contains \(\Omega(n)\) records. The straightforward way of specifying such a query is to enumerate the subset record by record using \(\Omega(n)\) conditions:<sup id="fnref:3" role="doc-noteref"><a href="#fn:3" class="footnote" rel="footnote">3</a></sup></p>
<pre><code class="language-SQL">SELECT COUNT(*) FROM loans
WHERE loanStatus = 'C'
AND (clientId = 2007
OR clientId = 2018
...
OR clientId = 2991)
</code></pre>
<p>A query with \(\Omega(n)\) conditions is answered with noise with standard deviation \(\Omega(\sqrt{n})\), enough to thwart efficient reconstruction algorithms.</p>
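<p>A quick simulation confirms this scaling (illustrative parameters; per-condition noise is modeled as independent unit-variance Gaussians):</p>

```python
import random

# A query that enumerates n records carries one noise term per
# condition, and n independent terms of standard deviation sigma add up
# to total noise of standard deviation sigma * sqrt(n).
rng = random.Random(0)
n, sigma, trials = 1000, 1.0, 2000
totals = [sum(rng.gauss(0, sigma) for _ in range(n)) for _ in range(trials)]
mean = sum(totals) / trials
sd = (sum((t - mean) ** 2 for t in totals) / trials) ** 0.5
assert 0.9 * n ** 0.5 < sd < 1.1 * n ** 0.5   # empirically about sqrt(n)
```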
<h3 id="carrying-out-reconstruction">Carrying out reconstruction</h3>
<p>The additional noise per SQL condition is the main obstacle to running a successful reconstruction attack on a database behind Diffix. As described above, the noise prevents the naive implementation of the reconstruction algorithm from receiving accurate enough answers to reconstruct the database using a reasonable number of queries.
A natural approach is to use very few SQL conditions—ideally, just one—to make random-enough queries, each identifying a subset of the records in the dataset.
So the challenge is to formulate a large family of such queries that are accepted by Diffix’s restricted subset of SQL, using as few conditions as possible.</p>
<h4 id="the-cohennissim-attack">The Cohen–Nissim Attack</h4>
<p>Instead of specifying each row with a separate condition, the Cohen–Nissim attack <a href="https://arxiv.org/abs/1810.05692">[CN18]</a> uses an ad hoc <em>hash function</em> to extract entropy from the data itself in order to systematically choose the needed subsets.
Suppose we have a list of the values in the database’s <code class="language-plaintext highlighter-rouge">clientId</code> column, and we want to recover the <code class="language-plaintext highlighter-rouge">loanStatus</code> secret bit. Rather than explicitly enumerating the <code class="language-plaintext highlighter-rouge">clientId</code>s for a random subset of the rows to include in each query, we can write a boolean-valued function which evaluates to true on about half of the <code class="language-plaintext highlighter-rouge">clientId</code>s and ask Diffix to include only the rows for which the condition is true. In this way, instead of first choosing a subset of rows and then asking Diffix about those rows, we choose this function and use its evaluation to specify our random subset.</p>
<p>After some experimentation with the language restrictions, Cohen and Nissim settled on the following:</p>
<pre><code class="language-SQL">...
WHERE FLOOR(100 * ((clientId * 2)^0.7))
= FLOOR(100 * ((clientId * 2)^0.7) + 0.5)
</code></pre>
<p>Let’s see what this does. Let \(d=d_0.d_1 d_2 d_3 d_4 \dots \) be the decimal representation of the value \(d = (\mathtt{clientID}\cdot 2)^{0.7}\), which appears on both sides of the equality.
The expression is true if and only if \(d_3 < 5\).
To see this, the left hand side evaluates to \(d_{0}d_{1}d_{2} = \lfloor 100d \rfloor\); the right hand side evaluates to \(d_{0}d_{1}d_{2}\) if \(d_3 < 5\) or \(d_{0}d_{1}d_{2}+1\) if \(d_3 \geq 5\). In the former case, the equality condition evaluates to ‘true’, and in the latter case it evaluates to ‘false’. Replacing 100 with other powers of 10 changes which digit in the decimal expansion is checked.</p>
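<p>The trick is easy to check numerically. Here is a short Python sketch of the condition, using the same constants and <code class="language-plaintext highlighter-rouge">clientId</code> range as the example above:</p>

```python
import math

# The Cohen-Nissim condition: FLOOR(100*d) == FLOOR(100*d + 0.5) holds
# exactly when the fractional part of 100*d is below 0.5, i.e. when the
# third decimal digit of d = (2*clientId)**0.7 is less than 5.
def included(client_id, mult=2, exp=0.7, scale=100):
    d = (client_id * mult) ** exp
    return math.floor(scale * d) == math.floor(scale * d + 0.5)

# The digits of (2*clientId)**0.7 behave like random digits, so the
# condition carves out roughly half of any clientId range.
subset = [cid for cid in range(2000, 3001) if included(cid)]
frac = len(subset) / 1001
assert 0.35 < frac < 0.65
```

Changing <code class="language-plaintext highlighter-rouge">scale</code> to another power of 10 checks a different decimal digit, which is exactly how the family of queries is generated.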
<p>By varying the constants in the SQL query, this single expression yields a whole family of conditions, albeit a very ad-hoc one. The hope was that, for different primes \(q\) and fractional exponents \(p\), the individual digits of the decimal representations of \((\mathtt{clientID}*q)^p\) would be random enough for reconstruction to work.
The complete attack queries looked like this:</p>
<pre><code class="language-SQL">SELECT COUNT(clientId) FROM loans
WHERE FLOOR(100 * ((clientId * 2)^.7))
= FLOOR(100 * ((clientId * 2)^.7) + 0.5)
AND clientId BETWEEN 2000 and 3000
AND loanStatus = 'C'
</code></pre>
<p>The range condition at the end simply selects a subset of the data which is small enough for the attack to run quickly on a personal computer but large enough to satisfy the requirements of the Diffix bounty program. This family of queries allows a linear program to reconstruct the secret <code class="language-plaintext highlighter-rouge">loanStatus</code> bits with high accuracy.</p>
<p>In the course of verifying the attack for the Diffix bounty program, reconstruction was carried out on 4 different ranges of <code class="language-plaintext highlighter-rouge">clientId</code>s containing 455 records. For each record, the attack correctly determined whether or not the corresponding <code class="language-plaintext highlighter-rouge">loanStatus</code> was <code class="language-plaintext highlighter-rouge">C</code>.</p>
<p>Aircloak’s response to this attack was to further restrict the queries allowed by Diffix. Columns like <code class="language-plaintext highlighter-rouge">clientId</code>, where most of the values correspond to a single user, are tagged as ‘isolating’, and mathematical functions can no longer be used on such columns. The hope was that this modification would prevent the extraction of entropy from an identifying column via hashing.</p>
<h4 id="the-dickjosephschutzman-attack">The Dick–Joseph–Schutzman Attack</h4>
<p>Without the ability to directly use a uniquely identifying column from the database itself, we need another way to single out rows of the database. We can use an idea that’s been around since the 1990s, when Latanya Sweeney showed<sup id="fnref:4" role="doc-noteref"><a href="#fn:4" class="footnote" rel="footnote">4</a></sup> that almost 90 percent of Americans can be uniquely identified by the combination of their date of birth, ZIP code, and gender, even though each of these attributes alone is nowhere near sufficient to isolate a single individual. In the same spirit, we can try to evade the modification to Diffix by choosing multiple non-isolating columns which, taken together, isolate rows in the database.</p>
<p>This modified attack uses the <code class="language-plaintext highlighter-rouge">pickup_latitude</code> column in the <code class="language-plaintext highlighter-rouge">taxi</code> data set as the source of entropy. This column is non-isolating, in part because a large number of rows record the value as zero. We can combine it with the <code class="language-plaintext highlighter-rouge">trip_distance</code> column and run queries of the following form:</p>
<pre><code class="language-SQL">SELECT COUNT(*) FROM rides
WHERE FLOOR(pickup_latitude ^ 8.789 + 0.5)
= FLOOR(pickup_latitude ^ 8.789)
AND trip_distance IN (0.87, 1.97, 2.75)
AND payment_type = 'CSH'
</code></pre>
<p>This example query is part of an attack to recover the <code class="language-plaintext highlighter-rouge">payment_type</code> column, which (for the purposes of this attack) is a binary column containing two values: <code class="language-plaintext highlighter-rouge">CRD</code> (for credit card payments) and <code class="language-plaintext highlighter-rouge">CSH</code> (for cash payments). The <code class="language-plaintext highlighter-rouge">IN (0.87, 1.97, 2.75)</code> condition restricts to a subset of the data with about 450 rows, each with a distinct value for <code class="language-plaintext highlighter-rouge">pickup_latitude</code>. However, because relatively few rows across the whole database have a unique value in this column, Diffix does not consider it ‘isolating’, and it can be used as Cohen–Nissim used <code class="language-plaintext highlighter-rouge">clientId</code>. The values in <code class="language-plaintext highlighter-rouge">pickup_latitude</code> are recorded to six decimal places of precision, and the least significant four of these are essentially random digits. Choosing an appropriate range for the exponent and applying the same trick as in the Cohen–Nissim attack yields a Diffix-accepted query that includes around half of the rows in the targeted subset. Varying the exponent produces a large family of such queries, which allows the attack to be carried out as before with similarly high accuracy of over 95 percent.</p>
<p>Dick–Joseph–Schutzman additionally extends this attack to recover <em>numerical</em> rather than just binary secret data. By using queries of the form</p>
<pre><code class="language-SQL">SELECT SUM(passenger_count) FROM rides ...
</code></pre>
<p>Diffix returns noisy sums over the specified subset for a numeric column like <code class="language-plaintext highlighter-rouge">passenger_count</code>. Then, a similar linear program can reconstruct estimates for these values with high accuracy. For numeric columns like <code class="language-plaintext highlighter-rouge">passenger_count</code> which take on relatively few distinct values, the attack recovers the exact values with accuracy above 75 percent. Because the Diffix bounty program rules only count perfect reconstruction of a value as ‘accurate’, we didn’t evaluate the performance of the attack on numeric columns with richer values, such as <code class="language-plaintext highlighter-rouge">dropoff_latitude</code>.</p>
<p>Finally, this attack can be extended to reconstruct string data character by character. A U.S. social security number is a string formatted like <code class="language-plaintext highlighter-rouge">xxx-xx-xxxx</code>, with nine unknown digits in three blocks separated by dashes, so there are potentially one billion different strings that could appear in this column. However, by exploiting the structure of the data, a separate attack can be run to recover each digit individually using the summation attack, since there are only ten different values each digit could take. Queries of the form</p>
<pre><code class="language-SQL">SELECT SUM(CAST(SUBSTRING(ssn, 3, 1) AS integer)) FROM rides ...
</code></pre>
<p>can be used to recover the 3rd digit from each row’s social security number. Running this attack for each digit then aggregating the individual guesses to construct a guess for each user’s entire social security number allows the attack to achieve perfect reconstruction on about 90 percent of the values. A similar attack worked on the <code class="language-plaintext highlighter-rouge">pickup_datetime</code> and <code class="language-plaintext highlighter-rouge">dropoff_datetime</code> columns, with separate attacks on the value in the seconds position, the minutes position, and so on, and finally piecing these together to correctly reconstruct about 85 percent of the values.</p>
<p>Again, Aircloak’s response was to restrict the query language. Both of the successful attacks relied on the use of some arithmetic inside of a <code class="language-plaintext highlighter-rouge">FLOOR</code> function to check whether or not a row is included in a particular query. Diffix now forbids the use of arithmetic with <em>bucketing functions</em> such as <code class="language-plaintext highlighter-rouge">FLOOR</code>, <code class="language-plaintext highlighter-rouge">CEIL</code>, <code class="language-plaintext highlighter-rouge">ROUND</code>, etc. This defeats strategies which choose random-ish subsets via this kind of hashing, but does not necessarily preclude the extraction of entropy from the data in other ways.</p>
<h4 id="whats-next">What’s Next?</h4>
<p>We’d again like to thank Aircloak for opening their system to attacks and critiques through the Diffix bounty program. By being so willing to expose their product in this way, they have provided a test bed for us to bridge the gap between theory and application and demonstrate how a linear reconstruction attack might work in practice. Vulnerability to these and other attacks is a potential threat to any data privacy system which does not account for the cumulative threat to privacy that may result from many seemingly-innocuous queries, not just Diffix. The attacks we describe here only require that the attacker have access to some subset of the data with a sufficient amount of entropy. While more entropy allows for more complete reconstruction, even something as accessible as a list of users’ email addresses might serve as the entropy source for this kind of attack, reconstructing a non-trivial amount of the secret data from a system that adds independent noise to each query. Systems like this fall into the trap of the classic arms race, where a designer builds a system to protect against certain attacks, then a clever and determined adversary defeats the system, and the designer is forced to make revisions. This cycle may never terminate, leaving us perpetually unsure of when we can be confident that a system is secure enough to trust with sensitive data.</p>
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>Descriptions of Diffix and Aircloak are based on <a href="https://www.aircloak.com">https://www.aircloak.com</a>, <a href="https://arxiv.org/pdf/1806.02075.pdf">https://arxiv.org/pdf/1806.02075.pdf</a>, <a href="https://demo.aircloak.com/docs/">https://demo.aircloak.com/docs/</a>, and the authors’ participation in the Aircloak bounty program. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>The variance is proportional to the largest effect any single user has on the output. For <code class="language-plaintext highlighter-rouge">COUNT</code> queries, this largest contribution is 1, and for <code class="language-plaintext highlighter-rouge">SUM</code> queries, it’s roughly the magnitude of the largest value in the column. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:3" role="doc-endnote">
<p>Note that Diffix’s syntax restrictions don’t allow disjunctions (using <code class="language-plaintext highlighter-rouge">OR</code>s). An equivalent way of writing this that is allowed by Diffix would use <code class="language-plaintext highlighter-rouge">...WHERE ... AND clientId IN (2007, 2018,...)</code>. For such conditions, Diffix adds a noise layer for each element of the <code class="language-plaintext highlighter-rouge">IN</code> condition. <a href="#fnref:3" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:4" role="doc-endnote">
<p>Sweeney, Latanya. “Simple demographics often identify people uniquely.” Health (San Francisco) 671.2000 (2000): 1-34. <a href="#fnref:4" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>
Aloni CohenSasho NikolovZachary SchutzmanJonathan UllmanTue, 27 Oct 2020 00:11:38 -0400
https://differentialprivacy.org/diffix-attack/
The Theory of Reconstruction Attacks<p>We often see people asking whether or not differential privacy might be overkill. Why do we need strong privacy protections like differential privacy when we’re only releasing approximate, aggregate statistical information about a dataset? Is it really possible to extract information about specific users from releasing these statistics? The answer turns out to be a resounding yes! The textbook by Dwork and Roth <a href="https://www.cis.upenn.edu/~aaroth/privacybook.html">[DR14]</a> calls this phenomenon the Fundamental Law of Information Recovery:</p>
<blockquote>
<p>Giving overly accurate answers to too many questions will inevitably destroy privacy.</p>
</blockquote>
<p>So what exactly does this fundamental law mean precisely, and how can we prove it? We can formalize and prove the law via <em>reconstruction attacks</em>, where an attacker can recover secret information from nearly every user in the dataset, simply by observing noisy answers to a modestly large number of (surprisingly simple) queries on the dataset. Reconstruction attacks were introduced in a seminal paper by Dinur and Nissim in 2003 <a href="https://dl.acm.org/doi/10.1145/773153.773173">[DN03]</a>. Although this paper predates differential privacy by a few years, the discovery of reconstruction attacks directly led to the definition of differential privacy, and shaped a lot of the early research on the topic. We now know that differentially private algorithms can, in some cases, match the limitations on accuracy implied by reconstruction attacks. When this is the case, we have a remarkably sharp transition from a blatant privacy violation when the accuracy is high enough to enable a reconstruction attack, to the strong protection given by differential privacy at the cost of only slightly lower accuracy.</p>
<p>Aside from the theoretical importance of reconstruction attacks, one may wonder whether they can be carried out in practice, or whether the attack model is unrealistic and can be avoided with some simple workarounds. In this series of posts, we argue that reconstruction attacks can be quite practical. In particular, we describe successful attacks by some of this post’s authors on a family of systems called <em>Diffix</em>, which attempt to prevent reconstruction without introducing as much noise as the reconstruction attacks suggest is necessary. To the best of our knowledge, these attacks represent the first successful attempt to reconstruct data from a commercial statistical-database system that is specifically designed to protect the privacy of the underlying data. A larger and much more significant demonstration of the practical power of reconstruction attacks was carried out by the US Census Bureau in 2018, motivating the Bureau’s adoption of differential privacy for data products derived from the 2020 decennial census <a href="https://queue.acm.org/detail.cfm?ref=rss&id=3295691">[GAM18]</a>.</p>
<p>This series will come in two parts: In this post, we will review the theory of reconstruction attacks, and present a model for reconstruction attacks that corresponds more directly to real attacks than the one that is typically presented. In the second post, we will describe attacks that were launched against various iterations of the <em>Diffix</em> system. \(
\newcommand{\uni}{\mathcal{X}} % The universe
\newcommand{\usize}{T} % Universe size
\newcommand{\elem}{x} % Generic universe element.
\newcommand{\pbs}{z} %Non-secret bits
\newcommand{\pbsuni}{\mathcal{Z}}
\renewcommand{\sb}{b} % Secret bit
\newcommand{\pds}{Z} %non-secret part of the data set
\newcommand{\ddim}{d} % Data dimension
\newcommand{\queries}{Q} % A set/workload of queries
\newcommand{\qmat}{\mat{Q}} % Query matrix
\newcommand{\qent}{w} % Entry of the query matrix
\newcommand{\hist}{h} % Histogram vector
\newcommand{\mech}{\mathcal{M}} % Generic Mechanism
\newcommand{\query}{q}
\newcommand{\queryfunc}{\varphi}
\newcommand{\ans}{a} % query answer
\newcommand{\qsize}{k}
\newcommand{\ds}{X}
\newcommand{\dsrow}{\elem} % same as elem above
\newcommand{\dsize}{n}
\newcommand{\priv}{\eps}
\newcommand{\privd}{\delta}
\newcommand{\acc}{\alpha}
\newcommand{\from}{:}
\newcommand{\set}[1]{\left\{#1\right\}}
\newcommand{\R}{\mathbb{R}}
\newcommand{\N}{\mathbb{N}}
\newcommand{\Z}{\mathbb{Z}}
\newcommand{\E}{\mathbb{E}}
\newcommand{\var}{\mathrm{Var}}
\newcommand{\I}{\mathbb{I}}
\newcommand{\tr}{\mathrm{Tr}}
\newcommand{\eps}{\varepsilon}
\newcommand{\pmass}{\mathbbm{1}}
\newcommand{\zo}{\{0,1\}}
\newcommand{\mat}[1]{#1} % matrix notation: for now nothing
\)</p>
<h3 id="a-model-of-reconstruction-attacks">A Model of Reconstruction Attacks</h3>
<p>This part presents the basic theory of reconstruction attacks. We’ll introduce a model of reconstruction attacks that is a little different from what you would see if you read the papers, and then describe the main results of Dinur and Nissim. At the end we will briefly mention some variations that have been considered in the nearly two decades since.</p>
<p>Let us fix a dataset model, so that we can describe the attack precisely. (These attacks are very flexible and the ideas can usually be adapted to new models, as we’ll see at the end of this part.) We take the dataset to be a collection of \(\dsize\) records \(\ds = \{\elem_1,\dots,\elem_n\}\), each corresponding to the data of a single person. The attacker’s goal is to learn some piece of secret information about as many individuals as possible, so we think of each record as having the form \(\elem_i = (\pbs_i,\sb_i)\) where \(\pbs_i\) is some identifying information, and \(\sb_i \in \zo\) is some secret. We assume that the secret is binary, although this aspect of the model can be generalized. We can visualize such a dataset as a matrix \([\pds \mid \sb]\) with two blocks as follows:
\[ \left[ \begin{array}{c|c} \pbs_1 & \sb_1 \\ \vdots & \vdots \\ \pbs_n & \sb_n \end{array} \right] \]
For a concrete example, suppose each element in the dataset contains \(d\) binary attributes, and the attacker’s goal is to learn the last attribute of each user. In this case we would write each element as a pair \((\pbs_i, \sb_i)\) where \(\pbs_i \in \zo^{d-1}\) and \(\sb_i \in \zo\).</p>
<p>Note that this distinction between \(\pbs_i\) and \(\sb_i\) is only in the mind of the attacker, who has some prior information about the users, but is trying to learn some specific secret information. In order to make the attack simpler to describe, we will also assume that the attacker knows \(\pbs_1,\dots,\pbs_\dsize\), which is everything about the dataset except the secret bits, although this assumption can also be relaxed to a large extent. As a shorthand, we will refer to \(\pbs_1, \ldots, \pbs_\dsize\) as the prior information, and to \(\sb_1, \ldots,\sb_\dsize\) as the secret bits.</p>
<p>Our goal is to understand whether asking aggregate queries defined by the prior information can allow an attacker to learn non-trivial information about the secret bits. Perhaps the most basic type of aggregate query we can ask is a <em>counting query</em>, which asks how many of the data points satisfy a given property. The Dinur-Nissim attacks assume that the attacker can get approximate answers to counting queries that ask how many data points satisfy some property defined in terms of the prior information and also have the sensitive bit set to \(1\). Let us use the notation \(\pbsuni\) for the set of all possible values that the prior information can take. For the purposes of the attack, each query \(\query\) will be specified by a function \(\queryfunc \from \pbsuni \to \zo\) and have the specific form
\[
\query(\ds) = \sum_{j=1}^{\dsize} \queryfunc(\pbs_j) \cdot \sb_j.
\]
This is a good time to make one absolutely crucial point about this model, which is that</p>
<blockquote>
<p>all the users are treated completely symmetrically by the queries, and the attacker cannot issue a query that targets a specific user \(x_i\) by name or a specific subset of users. The different users are distinguished only by their data. Nonetheless, we will see how to learn information about specific users from the answers to these queries.</p>
</blockquote>
<p>Returning to our example with binary attributes, consider the very natural set of queries that asks for the inner product of the secret bits with each attribute in the prior information, which is a measure of the correlation between these two attributes. Then each query takes the form \(\query_i(\ds) = \sum_{j=1}^{n} \pbs_{j,i} \cdot \sb_{j}\).</p>
<p>The nice thing about this type of query is that we can express the answers to a set of queries \({\query_1,\dots,\query_\qsize}\) defined by \(\queryfunc_1, \ldots, \queryfunc_\qsize\) as the following matrix-vector product \(\qmat_{\pds}\cdot \mat{b}\):
\[ \left[ \begin{array}{c}\query_1(\ds) \\ \vdots \\ \query_\qsize(\ds) \end{array} \right] = \left[ \begin{array}{ccc} \queryfunc_1(\pbs_1) & \dots & \queryfunc_1(\pbs_\dsize) \\ \vdots & \ddots & \vdots \\ \queryfunc_\qsize(\pbs_1) & \dots & \queryfunc_\qsize(\pbs_\dsize) \end{array} \right] \left[ \begin{array}{c} \sb_1 \\ \vdots \\ \sb_n \end{array} \right]
\]
so we can study this model using tools from linear algebra.</p>
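<p>In code, the model is just a handful of lines. The following Python sketch uses a toy instance with two binary attributes and four users; the queries are the two attribute projections plus the all-ones query:</p>

```python
# Each query applies a predicate phi to the prior information z_j and
# weights the secret bit b_j; a batch of queries is the matrix-vector
# product Q_Z . b.
z = [(0, 1), (1, 1), (1, 0), (0, 0)]   # prior information (toy attributes)
b = [1, 0, 1, 1]                       # secret bits
phis = [lambda zj: zj[0], lambda zj: zj[1], lambda zj: 1]

Q = [[phi(zj) for zj in z] for phi in phis]   # the query matrix Q_Z
answers = [sum(Qi[j] * b[j] for j in range(len(b))) for Qi in Q]

# q_1 counts users with first attribute 1 and secret bit 1, and so on.
assert answers == [1, 1, 3]
```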
<h3 id="an-inefficient-attack">An Inefficient Attack</h3>
<p>Exact answers to such queries are clearly revealing, because the attacker can use the predicates \[ \queryfunc_i(\pbs) = \begin{cases} 1 & \textrm{if } \pbs = \pbs_i \\ 0 & \textrm{otherwise} \end{cases} \] to single out a specific user and receive their bit \(\sb_i\). It is less obvious, however, that an attacker can learn a lot about the private bits even given noisy answers to the queries.</p>
<p>The first Dinur-Nissim attack shows that this is indeed possible—if the attacker can ask an unbounded number of counting queries, and each query is answered with, for example, 5% error, then the attacker can reconstruct 80% of the secret bits. This attack requires exponentially many queries to run, making it somewhat impractical, but it is a proof of concept that an attack can reconstruct a large amount of private information even from very noisy statistics. Later we will see how to scale down the attack to use fewer queries at the cost of requiring more accurate answers.</p>
<p>The attack itself is quite simple:</p>
<ul>
<li>
<p>For simplicity, assume all the \(\pbs_1, \ldots, \pbs_\dsize\) are distinct so that each user is uniquely identified by the prior information.</p>
</li>
<li>
<p>The attacker chooses the queries \(\query_1, \ldots, \query_\qsize\) so that the matrix \(\qmat_\pds\) has as its rows all of \(\zo^\dsize\). Namely, \(\qsize=2^\dsize\) and the functions \(\queryfunc_1, \ldots, \queryfunc_\qsize\) defining the queries take all possible values on \(\pbs_1, \ldots, \pbs_\dsize\).</p>
</li>
<li>
<p>The attacker receives a vector \(\ans\) of noisy answers to the queries, where \( |\query_{i}(\ds) - \ans_{i}| < \acc \dsize \) for each query \( \query_i \). In matrix notation, this means \[ \max_{i = 1}^\qsize |(\qmat_\pds\cdot {\sb})_i -\ans_i|= \| \qmat_\pds \cdot \sb -\ans\|_\infty \leq \alpha \dsize. \]
Note that, for \(\{0,1\}\)-valued queries, the answers range from \(0\) to \(\dsize\), so answers with additive error \(\pm 5\%\) correspond to \(\acc = 0.05\).</p>
</li>
<li>
<p>Finally, the attacker outputs any guess \(\hat{\sb} = (\hat{\sb}_{1}, \ldots, \hat{\sb}_{n})\) of the private bits vector that is consistent with the answers and the additive error bound \(\acc\). In other words, \(\hat{\sb}\) just needs to satisfy \[\max_{i = 1}^\qsize |\ans_i - (\qmat_\pds\cdot \hat{\sb})_i|= \| \qmat_\pds \cdot \hat\sb - a \|_{\infty} \leq \alpha \dsize \]
Note that a solution always exists, since the true private bits \(\sb\) will do.</p>
</li>
</ul>
<p>Our claim is that any such guess \(\hat{b}\) in fact agrees with the true private bits \(b\) for all but \(4\acc \dsize\) of the users. The reason is that if \(\hat{\sb}\) disagreed with more than \(4\acc \dsize\) of the secret bits, then the answer to some query would have eliminated \(\hat{\sb}\) from contention. To see this, fix some \(\hat{\sb}\in \zo^\dsize\), and let \[ S_{01} = \{j: \hat{\sb}_j = 0, \sb_j = 1\} \textrm{ and } S_{10} = \{j: \hat{\sb}_j = 1, \sb_j = 0\}\]
If \(\hat{\sb}\) and \(\sb\) disagree on more than \(4\acc \dsize\) bits, then at least one of these two sets has size larger than \(2\acc \dsize\). Let us assume that this set is \(S_{01}\), and we’ll deal with the other case by symmetry. Suppose that the \(i\)-th row of \(\qmat_\pds\) is the indicator vector of \(S_{01}\), i.e., \[(\qmat_\pds)_{i,j} = 1 \iff j \in S_{01}.\] We then have
\[
|(\qmat_{\pds}\cdot {\sb})_i - (\qmat_{\pds}\cdot \hat{\sb})_i|= |S_{01}| > 2 \acc \dsize,
\]
but, at the same time, if \(\hat{\sb}\) were output by the attacker, we would have
\[
|(\qmat_{\pds}\cdot {\sb})_i - (\qmat_{\pds}\cdot \hat{\sb})_i| \le |\ans_i - (\qmat_\pds\cdot \hat{\sb})_i| + |(\qmat_\pds \cdot \sb)_i - \ans_{i}| \le 2\acc \dsize, \]
which is a contradiction. An important point to note is that the attacker does not need to know the sets \(S_{01}\) and \(S_{10}\), or the corresponding \(i\)-th row of \(\qmat_\pds\) and query \(\query_i\). Since the attacker asks all possible queries determined by the prior information, we can be sure \(\query_i\) is one of these queries, and an accurate answer to it rules out this particular bad choice of \(\hat{\sb}\). We can summarize this discussion in the following theorem.</p>
<blockquote>
<p><strong>Theorem <a href="https://dl.acm.org/doi/10.1145/773153.773173">[DN03]</a>:</strong> There is a reconstruction attack that issues \(2^n\) queries to a dataset of \(n\) users, obtains answers with error \(\alpha n\), and reconstructs the secret bits of all but \(4 \alpha n\) users.</p>
</blockquote>
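<p>To make the exhaustive attack concrete, here is a toy simulation we wrote for this post (it is not code from the original paper, and all parameter choices are illustrative): the curator answers every subset-sum query with noise of magnitude at most \(\acc n\), and the attacker simply searches over all \(2^n\) candidate bit vectors for any consistent one.</p>
<pre><code class="language-python">import itertools
import random

def exhaustive_reconstruct(n, queries, answers, alpha):
    """Return any candidate bit vector consistent with every answer
    up to additive error alpha * n (the exhaustive Dinur-Nissim attack)."""
    for cand in itertools.product([0, 1], repeat=n):
        if all(abs(sum(cand[j] for j in S) - a) &lt;= alpha * n
               for S, a in zip(queries, answers)):
            return cand
    return None

random.seed(0)
n = 10                                      # tiny, since the search is 2^n
b = [random.randint(0, 1) for _ in range(n)]  # the secret bits
alpha = 0.05                                  # 5% error

# The attacker asks all 2^n subset-sum queries; the curator answers each
# with noise bounded by alpha * n.
queries = [[j for j in range(n) if (mask &gt;&gt; j) &amp; 1] for mask in range(2 ** n)]
answers = [sum(b[j] for j in S) + random.uniform(-alpha * n, alpha * n)
           for S in queries]

b_hat = exhaustive_reconstruct(n, queries, answers, alpha)
wrong = sum(bi != bhi for bi, bhi in zip(b, b_hat))
print(wrong)  # the theorem guarantees at most 4 * alpha * n wrong bits
</code></pre>
<p>Any consistent candidate works because, by the argument above, every candidate that disagrees with \(\sb\) on more than \(4\acc n\) bits is ruled out by some query.</p>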
<h3 id="an-efficient-attack">An Efficient Attack</h3>
<p>The exponential Dinur-Nissim attack is quite powerful, as it recovers 80% of the secret bits even from answers with 5% error, but it has the drawback that it requires asking \(2^\dsize\) queries to a dataset with \(\dsize\) users. Note that this is inherent to some extent. Suppose we randomly subsample 50% of the dataset and answer the queries using only this subset, rescaling appropriately. Although this random subsampling does not guarantee any meaningful privacy, no attacker can reconstruct more than about 75% of the secret bits, since half of them are effectively deleted and can at best be guessed at random. However, the guarantees of random sampling tell us that any set of \(\qsize\) queries will be answered with maximum error \( \acc n = O(\sqrt{n \log \qsize})\), so we can answer \( 2^{\Omega(n)} \) queries with \(5\%\) error while provably preventing this sort of reconstruction.</p>
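<p>The subsampling claim above can be checked with a quick simulation (again a sketch written for this post, with illustrative parameters): answer random subset-sum queries using only a random half of the users, rescale by two, and observe that the worst-case error over many queries stays on the order of \(\sqrt{n \log \qsize}\).</p>
<pre><code class="language-python">import math
import random

random.seed(1)
n = 1000                                        # number of users
b = [random.randint(0, 1) for _ in range(n)]    # the secret bits
kept = set(random.sample(range(n), n // 2))     # a random half of the users

num_queries = 2000
worst = 0.0
for _ in range(num_queries):
    S = [j for j in range(n) if random.random() &lt; 0.5]  # random subset query
    true_answer = sum(b[j] for j in S)
    # Answer using only the kept half, rescaled by 2 to estimate the full count.
    estimate = 2 * sum(b[j] for j in S if j in kept)
    worst = max(worst, abs(estimate - true_answer))

bound = math.sqrt(n * math.log(num_queries))    # ~ sqrt(n log Q)
print(f"worst error {worst:.0f}, sqrt(n log Q) ~ {bound:.0f}")
</code></pre>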
<p>However, Dinur and Nissim showed that if we obtain <em>highly accurate</em> answers—still noisy, but with error smaller than the sampling error—then we can reconstruct the dataset to high accuracy. We can also make the reconstruction process computationally efficient by using linear programming to replace the exhaustive search over all \(2^\dsize\) possible vectors of secrets. Specifically, we change the attack as follows:</p>
<ul>
<li>
<p>The attacker now chooses \(\qsize\) <em>randomly chosen</em> functions \( \varphi_i \from \pbsuni \to \{0,1\} \) for a much smaller \(\qsize = O(\dsize) \).</p>
</li>
<li>
<p>Upon receiving an answer vector \(\ans\), the attacker now searches for a <em>real-valued</em> \( \tilde{\sb} \in [0,1]^{\dsize} \) such that \( \| \ans - \qmat_\pds \cdot \tilde{\sb} \|_{\infty} \leq \acc n \). Note that this vector can be found efficiently via linear programming. The attacker then rounds each \( \tilde{\sb}_{i} \) to the nearest \( \hat{\sb}_{i} \in \{0,1\}\).</p>
</li>
</ul>
<p>It’s now much trickier to analyze this attack and show that it achieves low reconstruction error, and we won’t go into details in this post. However, the key idea is that, because the queries are chosen randomly, \( \qmat_\pds \) is a random matrix with entries in \( \{0,1\} \), and we can use the statistical properties of this random matrix to argue that, with high probability,
\[
\|\qmat_\pds \cdot \sb - \qmat_\pds \cdot \tilde{\sb}\|_\infty^2 \gtrsim |\{i: \sb_i \neq \hat{\sb}_i\}|.
\]
By the way we chose \(\tilde{\sb}\), we have
\[
\|\qmat_\pds \cdot \sb - \qmat_\pds \cdot \tilde{\sb}\|_\infty \le \|\qmat_\pds \cdot \sb - \ans\|_\infty + \| \ans - \qmat_\pds \cdot \tilde{\sb} \|_{\infty} \leq 2\acc n,
\]
so, by combining the two inequalities, we get that the number of incorrectly reconstructed bits is \( O(\alpha^2 n^2) \). Note that, in order to reconstruct 80% of the secret bits using this attack, we now need the error to be \( \alpha n \ll \sqrt{n} \), but as long as this condition on the error is satisfied, we will have a highly accurate reconstruction. Let’s add this theorem to your goodie bag:</p>
<blockquote>
<p><strong>Theorem <a href="https://dl.acm.org/doi/10.1145/773153.773173">[DN03]</a>:</strong> There is an efficient reconstruction attack that issues \(O(n)\) random queries to a dataset of \(n\) users, obtains answers with error \(\alpha n\), and, with high probability, reconstructs the secret bits of all but \( O(\alpha^2 n^2)\) users.</p>
</blockquote>
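<p>As a rough illustration of the efficient attack, here is a sketch we wrote for this post using scipy’s off-the-shelf LP solver (the solver choice and all parameters are ours, not from [DN03]): generate \(O(n)\) random 0/1 queries, answer them with noise of magnitude \(\alpha n\), solve the feasibility LP, and round.</p>
<pre><code class="language-python">import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 200                                    # number of users
b = rng.integers(0, 2, size=n)             # the secret bits
m = 4 * n                                  # O(n) random queries
Q = rng.integers(0, 2, size=(m, n))        # random 0/1 query matrix
alpha = 0.01                               # per-query error alpha * n
a = Q @ b + rng.uniform(-alpha * n, alpha * n, size=m)  # noisy answers

# Feasibility LP: find x in [0,1]^n with |Q x - a|_inf &lt;= alpha * n,
# i.e.  Q x &lt;= a + alpha*n  and  -Q x &lt;= alpha*n - a.
A_ub = np.vstack([Q, -Q])
b_ub = np.concatenate([a + alpha * n, alpha * n - a])
res = linprog(c=np.zeros(n), A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 1)] * n, method="highs")

b_hat = (res.x &gt;= 0.5).astype(int)         # round to the nearest bit
errors = int(np.sum(b_hat != b))
print(errors)  # typically a small fraction of n when alpha*n &lt;&lt; sqrt(n)
</code></pre>
<p>The LP is always feasible, since the true bit vector \(\sb\) itself satisfies every constraint.</p>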
<p>Although we modeled the queries, and thus the matrix \(\qmat_\pds\), as uniformly random, it’s important to note that we really only relied on the fact that
\[
\|\qmat_\pds \cdot \sb - \qmat_\pds \cdot \tilde{\sb}\|_\infty^2 \gtrsim
|\{i: \sb_i \neq \hat{\sb}_i\}|,
\]
and we can reconstruct while tolerating the same \(\Omega(\sqrt{n})\) error for any family of queries that gives rise to a matrix with this property. Intuitively, any <em>random-enough</em> family of queries will have this property. More specifically, the property is satisfied by any matrix with no small singular values <a href="https://dl.acm.org/doi/10.1007/978-3-540-85174-5_26">[DY08]</a> or with large discrepancy <a href="https://arxiv.org/abs/1203.5453">[MN12]</a>. There is a large body of work showing that many specific families of queries lead to reconstruction. For example, we can perform reconstruction using <em>conjunction queries</em> that ask for the marginal distribution of small subsets of the attributes <a href="https://dl.acm.org/doi/abs/10.1145/1806689.1806795">[KLSU10]</a>. That is, queries of the form “count the number of people with blue eyes and brown hair and a birthday in August.” In fairness, there are also families of queries that do not satisfy the property, or only satisfy quantitatively weaker versions of it, such as histograms and threshold queries, and for these queries it is indeed possible to achieve differential privacy with \( \ll \sqrt{n} \) error.</p>
<h3 id="conclusion">Conclusion</h3>
<p>This is going to be the end of our technical discussion, but before signing off, let’s mention some of the important extensions of this theorem that have been developed over the years:</p>
<ul>
<li>
<p>We can allow the secret information \(\sb\) to be integers or real numbers, rather than bits. The queries still return \(\qmat_\pds\cdot \sb\). The exponential attack then guarantees that, given answers with error \(\acc n\), the reconstruction \(\hat{\sb}\) satisfies \(\|\hat{\sb}-\sb\|_1 \le 4\acc n\). This means, for example, that the reconstructed secrets of all but \(4\alpha n\) users are within \(\pm 1\) of the true secrets. The efficient attack guarantees that \(\|\hat{\sb}-\sb\|_2^2 \le O(\acc^2 n^2)\), which means that the reconstructed secrets are within \(\pm 1\) for all but \(O(\acc^2 n^2)\) users.</p>
</li>
<li>
<p>It’s not crucial that <em>every</em> query be answered with error \( \ll \sqrt{n} \). If we are willing to settle
for an inefficient attack, then we can reconstruct even if only 51% of the queries have small error. If at least 75% have small error, then we can reconstruct efficiently <a href="https://dl.acm.org/doi/10.1145/1250790.1250804">[DMT07]</a>.</p>
</li>
<li>
<p>The reconstruction attacks still apply to the seemingly more general data model in which the private
dataset \(\ds\) is a subset of some arbitrary (but public) data universe \(\uni\). To see this, note that we can take \(\uni = \{\pbs_1, \ldots, \pbs_\dsize\}\), and we can interpret the secret bits \(\sb_i\) to indicate whether \(\pbs_i\) is an element of \(\ds\). Then the reconstruction attacks allow us to determine, up to some error, which elements of \(\uni\) are contained in \(\ds\). In this setting, the attack is sometimes called <em>membership inference</em>.</p>
</li>
<li>
<p>The fact that the efficient Dinur-Nissim reconstruction attack fails when the error is \( \gg \sqrt{n} \)
does not mean it’s easy to achieve privacy with error of that magnitude. As we mentioned earlier, we can achieve non-trivial error guarantees for a large number of queries simply by using a random subsample of half of the dataset, which is not a private algorithm in any reasonable sense of the word, as it can reveal everything about the chosen subset. As this example shows,</p>
<blockquote>
<p>preventing reconstruction attacks does not mean preserving privacy.</p>
</blockquote>
<p>In particular, there are membership-inference attacks that succeed in violating privacy even when the queries are answered with \( \gg \sqrt{n}\) error. We refer the reader to <a href="https://privacytools.seas.harvard.edu/publications/exposed-survey-attacks-private-data">[DSSU17]</a> for a more in-depth survey of reconstruction and membership-inference attacks.</p>
</li>
</ul>
<p>Many types of queries give rise to the conditions under which reconstruction is possible. Stay tuned for our next post, where we show how to generate those types of queries in practice against a family of systems known as <em>Diffix</em> that are specifically designed to thwart reconstruction.</p>
Aloni Cohen, Sasho Nikolov, Zachary Schutzman, Jonathan Ullman. Wed, 21 Oct 2020 12:30:00 -0400
https://differentialprivacy.org/reconstruction-theory/