Differential Privacy: Website for the differential privacy research community
https://differentialprivacy.org
Tight RDP & zCDP Bounds from Pure DP

<p>There are multiple ways to quantify differential privacy, including pure DP [<a href="https://journalprivacyconfidentiality.org/index.php/jpc/article/view/405" title="Cynthia Dwork, Frank McSherry, Kobbi Nissim, Adam Smith. Calibrating Noise to Sensitivity in Private Data Analysis. 2006.">DMNS06</a>], approximate DP [<a href="https://link.springer.com/chapter/10.1007/11761679_29" title="Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, Moni Naor. Our Data, Ourselves: Privacy Via Distributed Noise Generation. 2006.">DKMMN06</a>], Concentrated DP [<a href="https://arxiv.org/abs/1603.01887" title="Cynthia Dwork, Guy N. Rothblum. Concentrated Differential Privacy. 2016.">DR16</a>,<a href="https://arxiv.org/abs/1605.02065" title="Mark Bun, Thomas Steinke. Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds. 2016.">BS16</a>], Rényi DP [<a href="https://arxiv.org/abs/1702.07476" title="Ilya Mironov. Rényi Differential Privacy. 2017.">M17</a>], Gaussian DP [<a href="https://arxiv.org/abs/1905.02383" title="Jinshuo Dong, Aaron Roth, Weijie J. Su. Gaussian Differential Privacy. 2019.">DRS19</a>], & \(f\)-DP [<a href="https://arxiv.org/abs/1905.02383" title="Jinshuo Dong, Aaron Roth, Weijie J. Su. Gaussian Differential Privacy. 2019.">DRS19</a>].
Fortunately, these definitions are similar enough that we can convert between most of them (with some loss in parameters).</p>
<p>In this post, we consider converting from pure DP to Rényi DP and Concentrated DP. In particular, we provide optimal conversions, improving on what is currently in the literature.
But first, let’s recap the relevant definitions.</p>
<h2 id="definitions-pure-dp-rényi-dp--zcdp">Definitions: Pure DP, Rényi DP, & zCDP</h2>
<p>For notational simplicity, we will assume the output space of the algorithms is discrete and that the algorithms’ output distributions have full support.<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup></p>
<blockquote>
<p><strong>Definition 1 (Pure DP):</strong>
A randomized algorithm \(M : \mathcal{X}^n \to \mathcal{Y}\) satisfies \(\varepsilon\)-differential privacy if, for all pairs of inputs \(x, x’ \in \mathcal{X}^n\) differing only on the data of a single individual, we have \[\forall y \in \mathcal{Y} ~~~~~ \log\left(\frac{\mathbb{P}[M(x)=y]}{\mathbb{P}[M(x’)=y]}\right) \le \varepsilon.\]</p>
</blockquote>
<p>Pure DP is the simplest (and first) definition and is very convenient for analysis.
Pure DP can also be called pointwise DP because the guarantee holds for all points \(y\), whereas all the other definitions either bound some quantity averaged over \(y\) or quantify over sets \(S \subseteq \mathcal{Y}\).</p>
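To make the pointwise guarantee concrete, here is a small numerical sanity check (our own illustration, not from the post; the function name is made up): for binary randomized response with parameter \(\varepsilon\), the worst-case log-ratio over outcomes is exactly \(\varepsilon\).

```python
import math

def pointwise_log_ratio(p, q):
    """Worst-case |log(p[y]/q[y])| over all outcomes y (full support assumed)."""
    return max(abs(math.log(py / qy)) for py, qy in zip(p, q))

eps = 1.0
# Randomized response on one bit: report the true bit w.p. e^eps/(e^eps+1).
p = [math.exp(eps) / (math.exp(eps) + 1), 1 / (math.exp(eps) + 1)]  # M(x)
q = [1 / (math.exp(eps) + 1), math.exp(eps) / (math.exp(eps) + 1)]  # M(x')

assert abs(pointwise_log_ratio(p, q) - eps) < 1e-12  # eps-DP, and tightly so
```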
<blockquote>
<p><strong>Definition 2 (Rényi DP):</strong>
A randomized algorithm \(M : \mathcal{X}^n \to \mathcal{Y}\) satisfies \((\alpha,\widehat\varepsilon)\)-Rényi differential privacy if, for all pairs of inputs \(x, x’ \in \mathcal{X}^n\) differing only on the data of a single individual, we have \[ \frac{1}{\alpha-1} \log \left( \underset{Y \gets M(x’)}{\mathbb{E}}\left[ \left( \frac{\mathbb{P}[M(x)=Y]}{\mathbb{P}[M(x’)=Y]} \right)^\alpha \right] \right) \le \widehat\varepsilon.\]</p>
</blockquote>
<p>Rényi DP is a more flexible definition than pure DP. But this flexibility comes at the cost of complexity.
The definition has two parameters, but we can usually trade them off. Thus it is often better to think of RDP as parameterized by a function \(\widehat\varepsilon(\alpha)\), which gives an \((\alpha,\widehat\varepsilon(\alpha))\)-RDP bound for all \(\alpha>1\) simultaneously.
Fortunately, in many cases – such as the Gaussian mechanism – the function is linear, or can be bounded by a linear function.</p>
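Since we will be comparing RDP guarantees numerically, a direct implementation of the quantity in Definition 2 may be useful. This is a minimal sketch of our own (for discrete distributions with full support; the function name is illustrative):

```python
import math

def renyi_divergence(p, q, alpha):
    """Order-alpha Rényi divergence D_alpha(P||Q) for discrete distributions
    with full support and alpha > 1, matching the quantity in Definition 2."""
    moment = sum(qy * (py / qy) ** alpha for py, qy in zip(p, q))
    return math.log(moment) / (alpha - 1)

# Sanity checks: identical distributions have zero divergence, and the
# divergence is non-decreasing in the order alpha.
p, q = [0.7, 0.3], [0.4, 0.6]
assert renyi_divergence(q, q, 2.0) == 0.0
assert renyi_divergence(p, q, 1.5) <= renyi_divergence(p, q, 3.0)
```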
<blockquote>
<p><strong>Definition 3 (zero-Concentrated DP (zCDP)):</strong>
A randomized algorithm \(M : \mathcal{X}^n \to \mathcal{Y}\) satisfies \(\rho\)-zCDP if, for all pairs of inputs \(x, x’ \in \mathcal{X}^n\) differing only on the data of a single individual, we have \[ \forall \alpha > 1 ~~~~~ \frac{1}{\alpha-1} \log \left( \underset{Y \gets M(x’)}{\mathbb{E}}\left[ \left( \frac{\mathbb{P}[M(x)=Y]}{\mathbb{P}[M(x’)=Y]} \right)^\alpha \right] \right) \le \alpha\rho.\]</p>
</blockquote>
<p>This definition is equivalent to satisfying \((\alpha,\rho\alpha)\)-RDP for all \(\alpha>1\); zCDP can be thought of as a single-parameter version of RDP, which gives us many of the benefits of RDP without the complexity.</p>
<h2 id="converting-pure-dp-to-rényi-dp">Converting Pure DP to Rényi DP</h2>
<p>It is immediate from the definitions that \(\varepsilon\)-DP implies \((\alpha,\varepsilon)\)-RDP for all \(\alpha>1\).<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup>
This is just saying that the average value is at most the maximum value.
We can do better than this:</p>
<blockquote>
<p><strong>Theorem 4 (Pure DP to Rényi DP):</strong>
Let \(M : \mathcal{X}^n \to \mathcal{Y}\) be a randomized algorithm satisfying \(\varepsilon\)-differential privacy.
Then \(M\) satisfies \((\alpha,\widehat\varepsilon(\alpha))\)-Rényi DP for all \(\alpha>1\), where
\[ \widehat\varepsilon(\alpha) = \frac{1}{\alpha-1} \log \left( \frac{1}{e^\varepsilon+1} e^{\alpha \varepsilon} + \frac{e^\varepsilon}{e^\varepsilon+1} e^{-\alpha \varepsilon} \right) \]\[ = \varepsilon - \frac{1}{\alpha-1} \log \left( \frac{1+e^{-\varepsilon}}{1 + e^{-(2\alpha-1)\varepsilon}} \right). \]
Furthermore, this bound is tight.</p>
</blockquote>
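Both expressions in Theorem 4 are straightforward to evaluate, and one can verify numerically that they agree (a sketch of ours; function names are made up, and the second form is the one we would use in practice since it avoids large exponentials):

```python
import math

def eps_hat_direct(alpha, eps):
    """First expression in Theorem 4."""
    return math.log(math.exp(alpha * eps) / (math.exp(eps) + 1)
                    + math.exp(eps) * math.exp(-alpha * eps) / (math.exp(eps) + 1)) / (alpha - 1)

def eps_hat_stable(alpha, eps):
    """Second expression in Theorem 4; numerically better behaved."""
    return eps - math.log((1 + math.exp(-eps)) / (1 + math.exp(-(2 * alpha - 1) * eps))) / (alpha - 1)

for alpha in [1.5, 2.0, 8.0]:
    for eps in [0.1, 1.0, 3.0]:
        assert abs(eps_hat_direct(alpha, eps) - eps_hat_stable(alpha, eps)) < 1e-9
        assert eps_hat_direct(alpha, eps) <= eps + 1e-12  # never worse than the trivial bound
```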
<p><em>Proof.</em><sup id="fnref:3" role="doc-noteref"><a href="#fn:3" class="footnote" rel="footnote">3</a></sup>
Fix neighbouring inputs \(x, x’ \in \mathcal{X}^n\) and fix \(\alpha>1\).</p>
<p>First note that this bound is tight when \(M\) corresponds to randomized response.
That is, if \(M(x) = \mathsf{Bernoulli}(\tfrac{e^\varepsilon}{e^\varepsilon+1})\) and \(M(x’) = \mathsf{Bernoulli}(\tfrac{1}{e^\varepsilon+1})\), then the expression in the theorem statement is simply the expression in the definition of Rényi DP. Since this is consistent with \(M\) satisfying \(\varepsilon\)-DP, this proves tightness of the result.
To prove the result it only remains to show that randomized response is indeed the worst case \(M\).</p>
<p>We make two additional observations:
(1) The definition of pure DP implies \( \frac{\mathbb{P}[M(x)=y]}{\mathbb{P}[M(x’)=y]} \le e^\varepsilon \) for all \(y \in \mathcal{Y}\).
But the definition of pure DP is symmetric in \(x\) and \(x’\), so we can swap them and obtain a two-sided bound: \[ \forall y \in \mathcal{Y} ~~~~~ e^{-\varepsilon} \le \frac{\mathbb{P}[M(x)=y]}{\mathbb{P}[M(x’)=y]} \le e^\varepsilon.\]
(2) Since \(\sum_y \mathbb{P}[M(x)=y] = 1\), we have \[ \underset{Y \gets M(x’)}{\mathbb{E}}\left[ \frac{\mathbb{P}[M(x)=Y]}{\mathbb{P}[M(x’)=Y]} \right] = \sum_y \mathbb{P}[M(x’)=y] \cdot \frac{\mathbb{P}[M(x)=y]}{\mathbb{P}[M(x’)=y]} = 1. \]</p>
<p>Now we define a randomized rounding function \(A : [e^{-\varepsilon},e^\varepsilon] \to \{e^{-\varepsilon},e^\varepsilon\}\) by \(\mathbb{E}_A [A(z)] = z \).
That is, for all \( z \in [e^{-\varepsilon},e^\varepsilon] \), we have \[\underset{A}{\mathbb{P}}[A(z)=e^\varepsilon]=\frac{z-e^{-\varepsilon}}{e^\varepsilon-e^{-\varepsilon}} ~~~ \text{ and } ~~~ \underset{A}{\mathbb{P}}[A(z)=e^{-\varepsilon}]=\frac{e^\varepsilon-z}{e^\varepsilon-e^{-\varepsilon}}.\]
Since \( v \mapsto v^\alpha \) is convex, by Jensen’s inequality, for all \( z \in [e^{-\varepsilon},e^\varepsilon] \), we have \[z^\alpha = \mathbb{E}_A[A(z)]^\alpha \le \mathbb{E}_A[A(z)^\alpha] = \frac{z-e^{-\varepsilon}}{e^\varepsilon-e^{-\varepsilon}} \cdot e^{\varepsilon\alpha} + \frac{e^\varepsilon-z}{e^\varepsilon-e^{-\varepsilon}} e^{-\alpha\varepsilon}. \]
Applying this inequality to the quantity of interest with \(z = \frac{\mathbb{P}[M(x)=Y]}{\mathbb{P}[M(x’)=Y]} \), we get
\[ \underset{Y \gets M(x’)}{\mathbb{E}}\left[ \left( \frac{\mathbb{P}[M(x)=Y]}{\mathbb{P}[M(x’)=Y]} \right)^\alpha \right] \le \underset{Y \gets M(x’) }{\mathbb{E}}\left[ \frac{\frac{\mathbb{P}[M(x)=Y]}{\mathbb{P}[M(x’)=Y]}-e^{-\varepsilon}}{e^\varepsilon-e^{-\varepsilon}} \cdot e^{\varepsilon\alpha} + \frac{e^\varepsilon-\frac{\mathbb{P}[M(x)=Y]}{\mathbb{P}[M(x’)=Y]}}{e^\varepsilon-e^{-\varepsilon}} e^{-\alpha\varepsilon} \right] .\]
Observation 1 tells us that this is valid, since \(z \in [e^{-\varepsilon},e^\varepsilon]\). Observation 2 and linearity of expectations gives
\[ \underset{Y \gets M(x’) }{\mathbb{E}}\left[ \frac{\frac{\mathbb{P}[M(x)=Y]}{\mathbb{P}[M(x’)=Y]}-e^{-\varepsilon}}{e^\varepsilon-e^{-\varepsilon}} \cdot e^{\varepsilon\alpha} + \frac{e^\varepsilon-\frac{\mathbb{P}[M(x)=Y]}{\mathbb{P}[M(x’)=Y]}}{e^\varepsilon-e^{-\varepsilon}} e^{-\alpha\varepsilon} \right] = \frac{1-e^{-\varepsilon}}{e^\varepsilon-e^{-\varepsilon}} \cdot e^{\varepsilon\alpha} + \frac{e^\varepsilon-1}{e^\varepsilon-e^{-\varepsilon}} e^{-\alpha\varepsilon}.\]
We have \(\frac{1-e^{-\varepsilon}}{e^\varepsilon-e^{-\varepsilon}} = \frac{e^\varepsilon-1}{e^{2\varepsilon}-1} = \frac{e^\varepsilon-1}{(e^\varepsilon-1)(e^\varepsilon+1)} = \frac{1}{e^\varepsilon+1}\) and, similarly, \(\frac{e^\varepsilon-1}{e^\varepsilon-e^{-\varepsilon}} = \frac{e^\varepsilon}{e^\varepsilon+1}\).
Combining the equalities and inequalities gives \[ e^{(\alpha-1)\widehat\varepsilon(\alpha)} = \underset{Y \gets M(x’)}{\mathbb{E}}\left[ \left( \frac{\mathbb{P}[M(x)=Y]}{\mathbb{P}[M(x’)=Y]} \right)^\alpha \right] \le \frac{1}{e^\varepsilon+1} e^{\alpha\varepsilon} + \frac{e^\varepsilon}{e^\varepsilon+1} e^{-\alpha\varepsilon},\] which establishes the result.
The equivalence of the two expressions in the theorem statement is a matter of algebraic manipulation; the second expression is more suitable for numerical computation.
∎</p>
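One can also confirm tightness numerically: for the randomized-response distributions from the proof, the Rényi divergence matches \(\widehat\varepsilon(\alpha)\) exactly. A small self-contained check (our own; names are illustrative):

```python
import math

def renyi_div(p, q, alpha):
    """Order-alpha Rényi divergence for discrete distributions with full support."""
    return math.log(sum(qy * (py / qy) ** alpha for py, qy in zip(p, q))) / (alpha - 1)

def eps_hat(alpha, eps):
    """The bound of Theorem 4 (second expression)."""
    return eps - math.log((1 + math.exp(-eps)) / (1 + math.exp(-(2 * alpha - 1) * eps))) / (alpha - 1)

eps = 1.0
p = [math.exp(eps) / (math.exp(eps) + 1), 1 / (math.exp(eps) + 1)]  # M(x)
q = [1 / (math.exp(eps) + 1), math.exp(eps) / (math.exp(eps) + 1)]  # M(x')

for alpha in [1.5, 2.0, 5.0]:
    # Randomized response attains the bound exactly.
    assert abs(renyi_div(p, q, alpha) - eps_hat(alpha, eps)) < 1e-12
```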
<h2 id="converting-pure-dp-to-zcdp">Converting Pure DP to zCDP</h2>
<p>The RDP bound in Theorem 4 is tight, but a bit unwieldy. Now we look at zCDP bounds, which are looser but simpler.
The trivial bound gives that \(\varepsilon\)-DP implies \(\varepsilon\)-zCDP.
In <a href="/exponential-mechanism-bounded-range">a previous post</a> we proved that \(\varepsilon\)-DP implies \(\frac12\varepsilon^2\)-zCDP.<sup id="fnref:4" role="doc-noteref"><a href="#fn:4" class="footnote" rel="footnote">4</a></sup>
Now we prove a tight bound:</p>
<blockquote>
<p><strong>Theorem 5 (Pure DP to zCDP):</strong>
Let \(M : \mathcal{X}^n \to \mathcal{Y}\) be a randomized algorithm satisfying \(\varepsilon\)-differential privacy.
Then \(M\) satisfies \(\rho\)-zCDP, where
\[ \rho = \frac{e^\varepsilon-1}{e^\varepsilon+1} \varepsilon \le \frac12 \varepsilon^2. \]
Furthermore, this bound is tight.</p>
</blockquote>
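The bound in Theorem 5 is a one-liner to compute; as a quick check (our own sketch, with an illustrative name), it is dominated by both the trivial bound \(\rho=\varepsilon\) and the quadratic bound \(\rho=\tfrac12\varepsilon^2\), and matches the latter as \(\varepsilon\to0\):

```python
import math

def rho(eps):
    """Tight zCDP parameter for an eps-DP algorithm (Theorem 5)."""
    return eps * (math.exp(eps) - 1) / (math.exp(eps) + 1)

for eps in [0.01, 0.5, 1.0, 2.0, 5.0]:
    assert rho(eps) <= 0.5 * eps * eps + 1e-12  # quadratic bound
    assert rho(eps) <= eps                      # trivial bound

# For small eps, the tight bound is essentially eps^2/2.
assert abs(rho(0.01) - 0.5 * 0.01 ** 2) < 1e-7
```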
<p>To prove this result, we use the following inequality, a sharper version of <a href="https://en.wikipedia.org/wiki/Hoeffding%27s_lemma">Hoeffding’s lemma</a>.</p>
<blockquote>
<p><strong>Proposition 6 (Kearns-Saul inequality [<a href="https://arxiv.org/abs/1301.7392" title="Michael Kearns, Lawrence Saul. Large Deviation Methods for Approximate Probabilistic Inference. 2013.">KS13</a>,<a href="https://doi.org/10.1214/ECP.v18-2359" title="Daniel Berend, Aryeh Kontorovich. On the concentration of the missing mass. 2013.">BK13</a>,<a href="https://arxiv.org/abs/1901.09188" title="Julyan Arbel, Olivier Marchal, Hien D. Nguyen. On strict sub-Gaussianity, optimal proxy variance and symmetry for bounded random variables. 2019.">AMN19</a>]):</strong>
For all \(p \in [0,1]\) and all \(t\in\mathbb{R}\), we have \[1-p + p \cdot e^t \le \exp\left(t \cdot p + t^2 \cdot \frac{1-2p}{4\log((1-p)/p)}\right).\]</p>
</blockquote>
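The Kearns-Saul inequality is easy to check numerically on a grid (avoiding \(p=\tfrac12\), where the coefficient is interpreted as its limit \(\tfrac18\), recovering Hoeffding's lemma). A quick sketch of ours:

```python
import math

def ks_rhs(p, t):
    """Right-hand side of the Kearns-Saul inequality (p in (0,1), p != 1/2)."""
    c = (1 - 2 * p) / (4 * math.log((1 - p) / p))  # variance-proxy coefficient
    return math.exp(t * p + t * t * c)

# Verify 1 - p + p*e^t <= RHS on a small grid of (p, t) values.
for p in [0.05, 0.3, 0.7, 0.95]:
    for t in [-3.0, -0.5, 0.1, 2.0]:
        assert 1 - p + p * math.exp(t) <= ks_rhs(p, t) + 1e-12
```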
<p><em>Proof of Theorem 5.</em>
By Theorem 4, \(M\) satisfies \((\alpha,\widehat\varepsilon(\alpha))\)-Rényi DP for all \(\alpha>1\), where \[ e^{(\alpha-1)\widehat\varepsilon(\alpha)} = \frac{1}{e^\varepsilon+1} e^{\alpha \varepsilon} + \frac{e^\varepsilon}{e^\varepsilon+1} e^{-\alpha \varepsilon} .\]
We need to show \(\widehat\varepsilon(\alpha) \le \rho\alpha\) for all \(\alpha>1\). Fix \(\alpha>1\).</p>
<p>Let \(p = \tfrac{1}{e^\varepsilon+1}\). Then
\[ \frac{1}{e^\varepsilon+1} e^{\alpha \varepsilon} + \frac{e^\varepsilon}{e^\varepsilon+1} e^{-\alpha \varepsilon} = e^{-\alpha\varepsilon} \cdot \left( 1-p + p e^{2\alpha\varepsilon} \right) .\]
By the Kearns-Saul inequality, \[ e^{-\alpha\varepsilon} \cdot \left( 1-p + p e^{2\alpha\varepsilon} \right) \le \exp\left((2p-1)\alpha\varepsilon + ( 2 \alpha \varepsilon)^2 \cdot \frac{1-2p}{4\log((1-p)/p)}\right) .\]
Since \(2p-1 = - \tfrac{e^\varepsilon-1}{e^\varepsilon + 1}\) and \( \frac{1-p}{p} = e^\varepsilon \), this simplifies to \[ \exp\left((2p-1)\alpha\varepsilon + ( 2 \alpha \varepsilon)^2 \cdot \frac{1-2p}{4\log((1-p)/p)}\right) = \exp\left( -\alpha\varepsilon\frac{e^\varepsilon-1}{e^\varepsilon+1} + 4 \alpha^2 \varepsilon^2 \cdot \frac{1}{4\varepsilon} \cdot \frac{e^\varepsilon-1}{e^\varepsilon+1} \right) = \exp\left( (\alpha-1) \alpha \varepsilon \frac{e^\varepsilon-1}{e^\varepsilon+1} \right). \]
Combining the inequalities yields \( \widehat\varepsilon(\alpha) \le \alpha \varepsilon \frac{e^\varepsilon-1}{e^\varepsilon+1} \), which gives the result.</p>
<p>Tightness is witnessed by randomized response and by taking the limit \(\alpha \to 1\).
∎</p>
<h2 id="numerical-comparison">Numerical Comparison</h2>
<p>Let’s see what these improved bounds look like:</p>
<p><img src="/images/pdp2zcdp-purerenyi.png" width="700" alt="Plot showing the bound from Theorem 4 compared to the trivial bound and the bound implied by Theorem 5 for epsilon=0.5,1,2." style="margin:auto;display: block;" />
This first plot compares the tight Rényi DP bound from Theorem 4 (solid line) with the trivial bound (\(\widehat\varepsilon(\alpha)\le\varepsilon\), dotted line) and the bound implied by zCDP (\(\widehat\varepsilon(\alpha)\le\alpha\rho\), dashed line) via Theorem 5. We consider \(\varepsilon=\frac12\) (<font color="red">red</font> lines, bottom), \(\varepsilon=1\) (<font color="green">green</font> lines, middle), and \(\varepsilon=2\) (<font color="blue">blue</font> lines, top).</p>
<p>We see that the trivial bound is tight as the Rényi order \(\alpha\) becomes large, while the zCDP bound is tight for small Rényi orders (i.e., \(\alpha\to1\)).
The smaller \(\varepsilon\) is, the later this transition occurs.</p>
<p><img src="/images/pdp2zcdp-purezcdp.png" width="700" alt="Plot showing the bound from Theorem 5 compared to rho=epsilon^2/2 and rho=epsilon." style="margin:auto;display: block;" /></p>
<p>This second plot compares the tight zCDP bound from Theorem 5 (solid <font color="magenta">magenta</font> line) against the trivial bound (dotted <font color="yellow">yellow</font> line) and the quadratic bound (dashed <font color="cyan">cyan</font> line).</p>
<p>We see that, for small values of \(\varepsilon\), the quadratic bound is tight, while for large values of \(\varepsilon\), the trivial bound is tight.</p>
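The qualitative picture in the plots can be reproduced without plotting, by comparing the three Rényi DP bounds directly (our own sketch; function names are made up):

```python
import math

def eps_hat_tight(alpha, eps):
    """Tight bound of Theorem 4."""
    return eps - math.log((1 + math.exp(-eps)) / (1 + math.exp(-(2 * alpha - 1) * eps))) / (alpha - 1)

def eps_hat_zcdp(alpha, eps):
    """zCDP-derived linear bound alpha * rho, with rho from Theorem 5."""
    return alpha * eps * (math.exp(eps) - 1) / (math.exp(eps) + 1)

eps = 1.0
for alpha in [1.01, 2.0, 50.0]:
    tight = eps_hat_tight(alpha, eps)
    assert tight <= eps + 1e-12                       # beats the trivial bound
    assert tight <= eps_hat_zcdp(alpha, eps) + 1e-9   # beats the zCDP line
# The zCDP line is nearly tight as alpha -> 1; the trivial bound as alpha -> infinity.
```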
<h2 id="conclusion">Conclusion</h2>
<p>In this post, we have given improved bounds for converting from pure DP to Rényi DP and zCDP.
Numerically, these bounds are a modest improvement over the standard bounds.</p>
<p>The bounds are tight when the algorithm corresponds to randomized response.
However, in many cases we can prove better bounds for specific algorithms.
For example, in <a href="/exponential-mechanism-bounded-range">a previous post</a>, we proved better zCDP bounds for the exponential mechanism.</p>
<p>Another popular pure DP mechanism is Laplace noise addition. Mironov [<a href="https://arxiv.org/abs/1702.07476" title="Ilya Mironov. Rényi Differential Privacy. 2017.">M17</a>, Proposition 6] computed a tight Rényi DP bound specifically for the Laplace mechanism:
Adding Laplace noise with scale \(1/\varepsilon\) to a sensitivity-1 function guarantees \(\varepsilon\)-DP and also \((\alpha,\widehat\varepsilon_{\text{Lap}}(\alpha))\)-RDP for all \(\alpha>1\) and \[\widehat\varepsilon_{\text{Lap}}(\alpha) = \frac{1}{\alpha-1}\log\left( \frac{\alpha}{2\alpha-1} e^{(\alpha-1)\varepsilon} + \frac{\alpha-1}{2\alpha-1} e^{-\alpha\varepsilon} \right).\]</p>
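Plugging in numbers confirms that this Laplace-specific bound is never worse than the generic conversion of Theorem 4, as it must be, since Theorem 4 upper-bounds every \(\varepsilon\)-DP mechanism. A quick check of ours (illustrative names):

```python
import math

def eps_hat_lap(alpha, eps):
    """Mironov's tight RDP bound for the Laplace mechanism (scale 1/eps, sensitivity 1)."""
    moment = (alpha / (2 * alpha - 1) * math.exp((alpha - 1) * eps)
              + (alpha - 1) / (2 * alpha - 1) * math.exp(-alpha * eps))
    return math.log(moment) / (alpha - 1)

def eps_hat_pure(alpha, eps):
    """Generic bound of Theorem 4, tight only for randomized response."""
    return eps - math.log((1 + math.exp(-eps)) / (1 + math.exp(-(2 * alpha - 1) * eps))) / (alpha - 1)

for alpha in [1.5, 2.0, 10.0]:
    for eps in [0.5, 1.0, 2.0]:
        assert eps_hat_lap(alpha, eps) <= eps_hat_pure(alpha, eps) + 1e-12
```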
<h3 id="acknowledgements">Acknowledgements</h3>
<p>Thanks to Damien Desfontaines for prompting this post.
To the best of my knowledge this improved conversion first appeared in <a href="https://x.com/yuxiangw_cs/status/1565765508950999041">a Tweet by Yu-Xiang Wang</a>.</p>
<hr />
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>In general, we can replace \(\frac{\mathbb{P}[M(x)=y]}{\mathbb{P}[M(x’)=y]}\) with the Radon-Nikodym derivative of the probability distribution given by \(M(x)\) with respect to the probability distribution given by \(M(x’)\) evaluated at \(y\). If the output distributions do not have full support, we must handle division by zero; to do this we take \(\frac{0}{0} = 1\) and \(\frac{\eta}{0} = \infty\) for \(\eta>0\). <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>To be more precise, we have \[\underset{Y \gets M(x’)}{\mathbb{E}}\left[ \left( \frac{\mathbb{P}[M(x)=Y]}{\mathbb{P}[M(x’)=Y]} \right)^\alpha \right] \le \underset{Y \gets M(x’)}{\mathbb{E}}\left[ \frac{\mathbb{P}[M(x)=Y]}{\mathbb{P}[M(x’)=Y]} \right] \cdot \max_y \left( \frac{\mathbb{P}[M(x)=y]}{\mathbb{P}[M(x’)=y]} \right)^{\alpha-1} \le 1 \cdot \left( e^\varepsilon \right)^{\alpha-1},\] which yields the trivial conversion. Here we use Observation 2 from the proof of Theorem 4. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:3" role="doc-endnote">
<p>This proof technique is due to Bun & Steinke [<a href="https://arxiv.org/abs/1605.02065" title="Mark Bun, Thomas Steinke. Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds. 2016.">BS16</a>, Proposition 3.3]. <a href="#fnref:3" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:4" role="doc-endnote">
<p>Bun & Steinke [<a href="https://arxiv.org/abs/1605.02065" title="Mark Bun, Thomas Steinke. Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds. 2016.">BS16</a>, Proposition 3.3] first established this bound, although with a more involved proof. Earlier papers [<a href="https://guyrothblum.wordpress.com/wp-content/uploads/2014/11/drv10.pdf" title="Cynthia Dwork, Guy N. Rothblum, Salil Vadhan. Boosting and Differential Privacy. 2010.">DRV10</a>,<a href="https://arxiv.org/abs/1603.01887" title="Cynthia Dwork, Guy N. Rothblum. Concentrated Differential Privacy. 2016.">DR16</a>] proved slightly weaker bounds. <a href="#fnref:4" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>
Thomas Steinke, Mon, 27 May 2024 10:00:00 -0700
https://differentialprivacy.org/pdp-to-zcdp/

NeurIPS 2023 Outstanding Paper: Privacy auditing in just one run

<p>NeurIPS 2023 just wrapped up, and one of the two <a href="https://blog.neurips.cc/2023/12/11/announcing-the-neurips-2023-paper-awards/">outstanding paper awards</a> went to <a href="https://arxiv.org/abs/2305.08846">Privacy Auditing with One (1) Training Run</a>, by <a href="http://www.thomas-steinke.net/">Thomas Steinke</a>, <a href="https://scholar.google.com/citations?user=k6-nvDAAAAAJ">Milad Nasr</a>, and <a href="https://jagielski.github.io/">Matthew Jagielski</a>.
The main result of this paper is a method for auditing the (differential) privacy guarantees of an algorithm, but much faster and more practically than previous methods.
In this post, we’ll dive into what this all means.</p>
<p>In case you’re new to this: by now, it has been well established that ML models can leak information about their training data.
This has recently been demonstrated in a spectacular fashion for <a href="https://arxiv.org/abs/2012.07805">large language models</a> and <a href="https://arxiv.org/abs/2301.13188">diffusion models</a>, showing that these models are prone to <em>regurgitating</em> elements from their training dataset verbatim.
Beyond these models, training data leakage can occur to a variety of degrees in <a href="http://www.gautamkamath.com/CS860notes/lec1.pdf">other statistical settings</a>.
This can of course be problematic if the training data contains sensitive personal information that we do not wish to disclose.
It may also be relevant to other adjacent considerations, including copyright infringement, which we don’t delve into here.</p>
<p>While there have been a number of heuristic proposals for how to deal with such problems, only one method has stood the test of time: differential privacy (DP).
Roughly speaking, an algorithm (e.g., a model’s training procedure) is differentially private if its output has limited dependence (in some precise sense) on any single datapoint.
This has many convenient implications: if a training procedure is differentially private, the resulting model is very unlikely to spit out training data, it is hard to predict whether a particular datapoint was in its training dataset, etc.
This strong notion of privacy has been adopted by a number of organizations, including <a href="https://arxiv.org/abs/2305.18465">Google</a>, <a href="https://arxiv.org/abs/1712.01524">Microsoft</a>, and the US Census Bureau in the <a href="https://arxiv.org/abs/2204.08986">2020 US Census</a>.
Differential privacy is a quantitative guarantee, parameterized by a value \(\varepsilon \geq 0\): the smaller \(\varepsilon\) is, the stronger the privacy protection (albeit at the cost of utility).</p>
<p>In order to say an algorithm is differentially private, we have to <em>prove</em> it.
By analyzing the algorithm, we obtain an <em>upper bound</em> on the value of \(\varepsilon\), i.e., a guarantee that the algorithm satisfies <em>at least</em> some prescribed level of privacy.
And we can be confident in this guarantee without running a single line of code!
A rich <a href="https://arxiv.org/abs/1607.00133">line</a> <a href="https://arxiv.org/abs/1908.10530">of</a> <a href="https://arxiv.org/abs/2106.02848">work</a> studies a differentially private analogue of stochastic gradient descent (which includes per-example gradient clipping followed by Gaussian noise addition), providing tighter and tighter upper bounds on the value of \(\varepsilon\).</p>
<p>Is there any way to empirically <em>audit</em> the privacy of an algorithm?
Given a purportedly private procedure, is there an algorithm we can run to <em>lower bound</em> the value of \(\varepsilon\)?
This would demonstrate that the procedure enjoys privacy no better than some particular level.
There are many reasons one might want to audit an algorithm’s privacy guarantees:</p>
<ul>
<li>We can see if our privacy proof is tight: if we prove and audit matching values of \(\varepsilon\), then we know that neither can be improved.</li>
<li>We can see if our privacy proof is <em>wrong</em>: if we audit a value of \(\varepsilon\) that is <em>greater</em> than the value we prove, then we know there was a bug in our privacy proof.</li>
<li>If we’re unable to rigorously prove an algorithm is private, auditing gives some heuristic measure of how private the algorithm is (though this is not considered best practice in settings where privacy is paramount: auditing only lower bounds \(\varepsilon\), the true value may be much higher).</li>
</ul>
<p>There is a <a href="https://arxiv.org/abs/1902.08874">long</a> <a href="https://arxiv.org/abs/2006.07709">line</a> <a href="https://arxiv.org/abs/2101.04535">of</a> <a href="https://arxiv.org/abs/2302.07956">work</a> on this question from the perspective of <em>membership inference attacks</em>.
In a membership inference attack, we consider training a model on either a) some training dataset, or b) the same training dataset but with the inclusion of one extra datapoint (sometimes called a <em>canary</em>).
If we can correctly guess whether the canary was or was not in the training set, then we say the membership inference attack was successful.
However, recall that differential privacy limits the dependence on individual datapoints: if an algorithm is private, it means that membership inference attacks should not be very successful.
Conversely, if an attack <em>is</em> very successful, then it says that the algorithm is quantitatively <em>not</em> so private.
In other words, such membership inference attacks serve as an audit of the algorithm’s privacy.</p>
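To give a flavor of how attack success translates into an audited \(\varepsilon\), here is a deliberately simplified sketch (our own; it ignores \(\delta\) and sampling error, which real audits must handle, e.g., with confidence intervals): for an \(\varepsilon\)-DP mechanism, a balanced in/out guessing game succeeds with probability at most \(e^\varepsilon/(1+e^\varepsilon)\), so inverting an observed accuracy \(p\) gives a point-estimate lower bound \(\log(p/(1-p))\).

```python
import math

def eps_lower_bound(accuracy):
    """Point-estimate epsilon lower bound from membership-inference accuracy.

    For an eps-DP mechanism, a balanced membership-inference game succeeds
    with probability at most e^eps / (1 + e^eps); inverting that bound gives
    this estimate. (A real audit must also account for sampling error and
    for delta > 0.)
    """
    assert 0.5 <= accuracy < 1.0
    return math.log(accuracy / (1 - accuracy))

assert eps_lower_bound(0.5) == 0.0  # a chance-level attack proves nothing
assert abs(eps_lower_bound(math.e / (1 + math.e)) - 1.0) < 1e-12
```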
<p>An important technical point is that differential privacy is a <em>probabilistic</em> guarantee.
A single membership inference attack success or failure may happen by chance: in order to make conclusions about the privacy level of a procedure, we need to run the attack several times in order to estimate the <em>rate</em> of success.
Since for machine learning models, each attack corresponds to one training run, this can quickly result in prohibitive overheads.
As one extreme example, <a href="https://arxiv.org/abs/2202.12219">one work</a> trains 250,000 models to audit a proposed private training algorithm, revealing a bug in its privacy proof.
While these are small models (CNNs trained on MNIST), and the authors admit their auditing was overkill (they <em>only</em> needed to train 1,000 models), in modern settings, even a <em>single</em> extra training run is prohibitively expensive, thus rendering such privacy auditing methods impractical.</p>
<p>Here’s where the work of Steinke, Nasr, and Jagielski comes in: it performs privacy auditing with just one (1) training run.
This could even be the same as your actual training run, thus incurring minimal overhead with respect to the standard training pipeline.
Their method does this by randomly inserting <em>multiple</em> canaries into the dataset rather than just a single one, and privacy is audited by trying to guess which canaries were and were not trained on.
If one can correctly guess the status of many canaries, this implies that the procedure is not very private.
The analysis of this framework is the tricky part, and gets quite technical.
While textbook analysis of the addition/removal of multiple canaries would rely on a property of differential privacy known as “group privacy,” this turns out to be lossy.
Instead, the authors appeal to connections between differential privacy and generalization: they show that if you add multiple canaries i.i.d. for a single run, this behaves similarly to having multiple runs each with a single canary.</p>
<p>In short, this work is a breakthrough in privacy auditing.
It allows us to substantially reduce the computational overhead, from prohibitive to essentially negligible.
Up to this point, privacy auditing has mostly been employed by those with a surplus of compute: I’m excited to see how this work will make it more accessible to the GPU-poor.
Congratulations to Thomas, Milad, and Matthew on their fantastic result!</p>
Gautam Kamath, Tue, 02 Jan 2024 12:00:00 -0400
https://differentialprivacy.org/neurips23-op/

Open problem(s) - How generic can composition results be?

<p>The composition theorem is a cornerstone of the differential privacy literature.
In its most basic formulation, it states that if two mechanisms \(\mathcal{M}_1\) and \(\mathcal{M}_2\) are respectively \(\varepsilon_1\)-DP and \(\varepsilon_2\)-DP, then the mechanism \(\mathcal{M}\) defined by \(\mathcal{M}(D)=\left(\mathcal{M}_1(D),\mathcal{M}_2(D)\right)\) is \((\varepsilon_1+\varepsilon_2)\)-DP.
A large body of work focused on proving extensions of this composition theorem.
These extensions are of two kinds.</p>
<ul>
<li>Some composition results apply to different <em>settings</em> than fixed mechanisms.</li>
<li>Others extend known results to <em>variants</em> of differential privacy.</li>
</ul>
<p>In this blog post, we review existing results, and outline natural open questions appearing on both fronts.
We stumbled upon these open questions while building general-purpose differential privacy infrastructure, and we believe that solving them could have a positive impact on the usability and privacy/accuracy trade-offs provided by such tools.</p>
<h3 id="different-settings-for-composition">Different settings for composition</h3>
<p>First, let’s discuss what it means to compose two DP mechanisms.</p>
<h4 id="sequential-composition">Sequential composition</h4>
<p>In the original composition result [<a href="https://link.springer.com/chapter/10.1007/11681878_14">DMNS06</a>], all mechanisms \(\mathcal{M}_1\), \(\mathcal{M}_2\), etc., are fixed in advance, and have a predetermined privacy budget (resp. \(\varepsilon_1\), \(\varepsilon_2\), etc.).
They only take the sensitive data \(D\) as input: \(\mathcal{M}_2\) cannot see nor depend on \(\mathcal{M}_1(D)\).
This setting is typically called <em>sequential composition</em>.</p>
<p><img src="../images/sequential-composition.svg" width="80%" alt="A diagram representing sequential composition. A database icon is on the left. Arrows go from it to three boxes labeled M1, M2, and M3, each labeled with ε1, ε2, ε3; these ε values are labeled 'fixed budgets'." style="margin:auto;display: block;" /></p>
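As a toy illustration of this setting (our own sketch; the query answers and budgets are made up), here two Laplace mechanisms with budgets fixed in advance are run on the same database, and basic composition prices the pair at \(\varepsilon_1+\varepsilon_2\):

```python
import math
import random

def sample_laplace(scale, rng):
    """Draw one Laplace(0, scale) sample via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def laplace_mech(true_answer, sensitivity, eps, rng):
    """The Laplace mechanism: eps-DP for a query with the given L1 sensitivity."""
    return true_answer + sample_laplace(sensitivity / eps, rng)

rng = random.Random(0)
count, total = 100, 420.0                  # toy query answers on a fixed database D
r1 = laplace_mech(count, 1.0, 0.3, rng)    # M1 with fixed budget eps1 = 0.3
r2 = laplace_mech(total, 1.0, 0.5, rng)    # M2 with fixed budget eps2 = 0.5
total_eps = 0.3 + 0.5                      # basic composition: (M1, M2) is 0.8-DP
```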
<h4 id="adaptive-composition">Adaptive composition</h4>
<p>Shortly afterwards, the result was extended to a setting called <em>adaptive composition</em> [<a href="https://link.springer.com/chapter/10.1007/11761679_29">DKMMN06</a>].
In this context, each mechanism can access the outputs of previous mechanisms: for example, \(\mathcal{M}_2\) takes as input not only the sensitive data \(D\), but also \(\mathcal{M}_1(D)\).
However, the privacy budget associated with each mechanism is still fixed.</p>
<p><img src="../images/adaptive-composition.svg" width="80%" alt="A diagram representing adaptive composition. It's the same diagram as sequential composition, except there are arrows going from M1 to M2, and from M2 to M3." style="margin:auto;display: block;" /></p>
<h4 id="fully-adaptive-composition">Fully adaptive composition</h4>
<p>A natural extension of adaptive composition consists in allowing the privacy budget of each mechanism to depend on previous outputs.
This setting is called <em>fully adaptive composition</em> [<a href="https://proceedings.neurips.cc/paper_files/paper/2016/hash/58c54802a9fb9526cd0923353a34a7ae-Abstract.html">RRUV16</a>].
It captures a setting in which a single analyst is interacting with a DP interface, and can change which queries to run and their budget based on past results.</p>
<p><img src="../images/fully-adaptive-composition.svg" width="80%" alt="A diagram representing fully adaptive composition. It's the same diagram as adaptive composition, except the 'fixed budgets' label is gone, and there are arrows going from M1 to ε2, and from M2 to ε3." style="margin:auto;display: block;" /></p>
<p>Composition theorems in the fully adaptive setting are of two types.</p>
<ul>
<li><em>Privacy filters</em> assume that the DP interface has a fixed, total budget, and will refuse to answer queries once that budget is exhausted.</li>
<li><em>Privacy odometers</em>, by contrast, allow the analyst to run arbitrarily many queries using as much budget as they want, and quantify the privacy loss over time.</li>
</ul>
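A privacy filter is straightforward to sketch for pure DP, where simply summing the spent \(\varepsilon\)’s remains valid even when budgets are chosen adaptively (the subtleties arise for tighter, advanced-composition-style accounting). This minimal implementation is our own illustration, not from any of the cited papers:

```python
class PrivacyFilter:
    """Minimal pure-DP privacy filter: grants queries until a fixed total
    budget would be exceeded, then refuses. Accounting is a running sum
    of the granted epsilons."""

    def __init__(self, total_eps):
        self.total_eps = total_eps
        self.spent = 0.0

    def request(self, eps):
        if self.spent + eps > self.total_eps:
            return False          # refuse: budget would be exceeded
        self.spent += eps
        return True

f = PrivacyFilter(total_eps=1.0)
assert f.request(0.6)       # granted
assert not f.request(0.5)   # refused: 0.6 + 0.5 > 1.0
assert f.request(0.4)       # granted: exactly exhausts the budget
```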
<p>Somewhat surprisingly, there are separation results between the two types: one can obtain tighter composition theorems with privacy filters than with privacy odometers.</p>
<h4 id="concurrent-composition">Concurrent composition</h4>
<p>This is, however, not the end of the story.
Fully adaptive composition captures a setting in which a <em>single</em> analyst interacts with a DP interface.
What if <em>multiple</em> analysts have access to this interface, each with their own budget?
<em>Concurrent composition</em> [<a href="https://arxiv.org/abs/2105.14427">VW21</a>] captures this idea.
In this setting, the mechanisms that are being composed are <em>interactive</em> (we denote them by IM in the diagram below), and the analysts interacting with each mechanism can share results with each other, and adaptively decide which queries to run.
The goal is to quantify the total privacy budget cost, across analysts: do existing results extend to the composition of interactive mechanisms?</p>
<p><img src="../images/concurrent-composition.svg" width="80%" alt="A diagram representing concurrent composition. A database icon on the left has two-sided arrows going from two boxes labeled IM1 and IM2, respectively labeled ε1 and ε2. The first box has two pairs of arrows going back and forth between it and a smiley face. The second one has the same, with a different smiley face." style="margin:auto;display: block;" /></p>
<h4 id="fully-concurrent-composition">Fully concurrent composition?</h4>
<p>In concurrent composition as defined in [<a href="https://arxiv.org/abs/2105.14427">VW21</a>], the number of analysts and their respective privacy budget is fixed upfront.
This means that concurrent composition and fully adaptive composition results are incomparable.
This suggests an even more generic setting, which (to the best of our knowledge) has not been studied in the literature: a kind of concurrent composition, where the number of analysts and their budget is <em>not</em> predefined.
Let’s call this <em>fully concurrent composition</em>.
In this setting, an analyst with a certain privacy budget would be able to spin off a new interactive mechanism, with an adaptively-chosen privacy budget, that can also be interacted with concurrently.</p>
<p><img src="../images/fully-concurrent-composition.svg" width="80%" alt="A diagram representing fully concurrent composition. It's the same as the diagram for concurrent composition, except one of the pairs of arrows going to and from IM1 goes to a smaller box labeled IM3, labeled ε3, and there is also an arrow from IM1 to ε3. IM3 also has a pair of arrows going back and forth towards a third smiley face." style="margin:auto;display: block;" /></p>
<p>This setting might seem pointless — why would analysts want to do this? — but proving composition results in this context would help build DP interfaces that combine expressivity and conceptual simplicity.
To understand why, let’s take a look at how <a href="https://tmlt.dev">Tumult Analytics</a><sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup> allows users to use its parallel composition feature.</p>
<p>Tumult Analytics has a concept of a <em>Session</em>, which is initialized on some sensitive data with a given privacy budget.
Users can submit queries to this Session using a query language implemented in Python.
Each query executed by the Session will consume part of the overall privacy budget, and return DP results.
The user can then examine these results to decide which queries to submit to the Session next, and with which privacy budget.
So far, this matches the fully adaptive setting, in its privacy filter formulation.</p>
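<p>The privacy-filter behavior that a Session implements can be sketched as a tiny class that tracks a fixed total budget and refuses any query that would exceed it. This is an illustrative toy with hypothetical names, not Tumult Analytics’ actual API:</p>

```python
class PrivacyFilterSession:
    """Toy privacy filter: runs queries against sensitive data until a
    fixed total epsilon budget is exhausted (hypothetical API)."""

    def __init__(self, data, total_epsilon):
        self._data = data
        self._remaining = total_epsilon

    @property
    def remaining_budget(self):
        return self._remaining

    def evaluate(self, query, epsilon):
        # Check the budget *before* running the query, so that a
        # refused query consumes nothing.
        if epsilon > self._remaining:
            raise RuntimeError("privacy budget exhausted")
        self._remaining -= epsilon
        return query(self._data, epsilon)
```

<p>An analyst can call <code>evaluate</code> repeatedly, choosing each query and its budget based on earlier answers — exactly the fully adaptive, privacy-filter setting.</p>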
<p>But Tumult Analytics also allows users to split their sensitive data depending on the value of an attribute, and perform different operations in each partition of the data.
With this feature, users can write algorithms that use <em>parallel composition</em>, which is very useful.
This partitioning operation takes a fraction of the privacy budget, and spins off <em>sub-Sessions</em> that each have access to a subset of the original data.
The following diagram visualizes an example of this process.</p>
<p><img src="../images/parallel-composition-analytics.svg" width="80%" alt="A diagram visualizing an example of parallel composition in Tumult Analytics. At the top is a database icon labeled 'Data'. A double-sided arrow goes from it to a box labeled 'Session 1, ε1 = 3'. Under this box is a differently-colored box labeled 'Parallel partitioning using ε2 = 1', three dotted-line arrows go through this box towards boxes labeled 'Session 2a, ε2 = 1', 'Session 2b, ε2 = 1', and 'Session 1, ε1 = 2'. Session 2a and 2b have arrows going to and from the database icon, cut in two parts (one for each Session)." style="margin:auto;display: block;" /></p>
<p>At the beginning, there is one Session with a privacy budget of \(\varepsilon_1=3\).
After the partitioning operation, there are now <em>three</em> Sessions: the original Session that has access to all the data and has a leftover privacy budget of \(\varepsilon_1=2\), and two sub-Sessions that each have access to a partition of the data and have a privacy budget of \(\varepsilon_2=1\).
The analyst using this interface can interact with any of these three Sessions, and interleave queries between each, in a fully interactive manner.
This means that even though there is a single user interacting with the data, the setting is similar to concurrent composition: each Session is an interactive object with a maximum privacy budget.
However, note that the privacy budget associated with each of the sub-Sessions could, in principle, depend on the result of past queries.
This suggests that we need composition results that take this into account, and capture the fully concurrent setting suggested above.</p>
<h3 id="composition-for-variants-of-differential-privacy">Composition for variants of differential privacy</h3>
<h4 id="existing-results-and-natural-questions">Existing results and natural questions</h4>
<p>A large number of variants and extensions of differential privacy have been proposed in the literature.
In many cases, a benefit of these alternative definitions is to improve the privacy analysis of mechanisms that compose a large number of simpler primitives.
For example, the \(n\)-fold composition of \(\varepsilon\)-DP mechanisms is \(n\varepsilon\)-DP, but the \(n\)-fold composition of \((\varepsilon,\delta)\)-DP mechanisms is also \((\varepsilon’,\delta’)\)-DP, with \(\varepsilon’\approx\sqrt{n}\varepsilon\) and \(\delta’\approx n\delta\).
Machine learning applications often use the moments accountant to perform privacy accounting, relying on the composition property of Rényi DP [<a href="https://ieeexplore.ieee.org/abstract/document/8049725">Mir17</a>, <a href="https://research.google/pubs/pub45428/">ACGMMTZ16</a>].
Gaussian DP and its generalization \(f\)-DP [<a href="https://academic.oup.com/jrsssb/article/84/1/3/7056089">DRS22</a>] are also used in this context [<a href="https://arxiv.org/abs/1911.11607">BDLS20</a>].
Meanwhile, statistical applications relying on the Gaussian mechanism often use zero-concentrated DP [<a href="https://link.springer.com/chapter/10.1007/978-3-662-53641-4_24">BS16</a>] (zCDP) for their privacy analysis [<a href="https://desfontain.es/privacy/real-world-differential-privacy.html">Des21</a>]; the approximate version of this definition is also useful when queries are grouped by an unknown domain [<a href="https://arxiv.org/abs/2301.01998">SDH23</a>].</p>
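<p>The \(\sqrt{n}\) behavior mentioned above can be made concrete with one common statement of the advanced composition theorem. This is only a sketch of the textbook formula; the optimal bounds of [KOV15] in the table below are tighter:</p>

```python
import math

def basic_composition(n, epsilon):
    """n-fold composition of epsilon-DP mechanisms is (n * epsilon)-DP."""
    return n * epsilon

def advanced_composition(n, epsilon, delta, delta_slack):
    """Textbook advanced composition: the n-fold composition of
    (epsilon, delta)-DP mechanisms is (eps_prime, n*delta + delta_slack)-DP,
    with eps_prime = epsilon*sqrt(2n ln(1/delta_slack)) + n*epsilon*(e^epsilon - 1)."""
    eps_prime = (epsilon * math.sqrt(2 * n * math.log(1 / delta_slack))
                 + n * epsilon * (math.exp(epsilon) - 1))
    return eps_prime, n * delta + delta_slack
```

<p>For example, composing one hundred \(0.1\)-DP mechanisms gives \(\varepsilon = 10\) under basic composition, while the advanced bound gives \(\varepsilon’ \approx 6.3\) at \(\delta’ = 10^{-6}\).</p>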
<p>It is thus natural to study the composition of these variants under the settings described in the previous section.
For many variants and composition settings, <em>optimal</em> composition results have been proven.
We give an overview in the following table.</p>
<table>
<thead>
<tr>
<th> </th>
<th><strong>Sequential</strong></th>
<th><strong>Adaptive</strong></th>
<th><strong>Fully adaptive</strong></th>
<th><strong>Concurrent</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>\(\varepsilon\)-DP</td>
<td>[<a href="https://link.springer.com/chapter/10.1007/11681878_14">DMNS06</a>]</td>
<td>[<a href="https://link.springer.com/chapter/10.1007/11761679_29">DKMMN06</a>]</td>
<td>[<a href="https://proceedings.neurips.cc/paper_files/paper/2016/hash/58c54802a9fb9526cd0923353a34a7ae-Abstract.html">RRUV16</a>]</td>
<td>[<a href="https://arxiv.org/abs/2105.14427">VW21</a>]</td>
</tr>
<tr>
<td>\((\varepsilon,\delta)\)-DP</td>
<td>[<a href="https://proceedings.mlr.press/v37/kairouz15.html">KOV15</a>]</td>
<td>[<a href="https://proceedings.mlr.press/v37/kairouz15.html">KOV15</a>]</td>
<td>[<a href="https://proceedings.mlr.press/v202/whitehouse23a.html">WRRW22</a>]*</td>
<td>[<a href="https://proceedings.mlr.press/v202/whitehouse23a.html">WRRW22</a>, <a href="https://proceedings.neurips.cc/paper_files/paper/2022/hash/3f52b555967a95ee850fcecbd29ee52d-Abstract-Conference.html">Lyu22</a>]</td>
</tr>
<tr>
<td>Gaussian DP</td>
<td>[<a href="https://academic.oup.com/jrsssb/article/84/1/3/7056089">DRS22</a>]</td>
<td>[<a href="https://academic.oup.com/jrsssb/article/84/1/3/7056089">DRS22</a>]</td>
<td>[<a href="https://arxiv.org/abs/2210.17520">ST22</a>]</td>
<td>[<a href="https://arxiv.org/abs/2207.08335">VZ22</a>]</td>
</tr>
<tr>
<td>\(f\)-DP</td>
<td>[<a href="https://academic.oup.com/jrsssb/article/84/1/3/7056089">DRS22</a>]</td>
<td>[<a href="https://academic.oup.com/jrsssb/article/84/1/3/7056089">DRS22</a>]</td>
<td> </td>
<td>[<a href="https://arxiv.org/abs/2207.08335">VZ22</a>]</td>
</tr>
<tr>
<td>\((\alpha,\varepsilon)\)-Rényi DP</td>
<td>[<a href="https://ieeexplore.ieee.org/abstract/document/8049725">Mir17</a>]</td>
<td>[<a href="https://ieeexplore.ieee.org/abstract/document/8049725">Mir17</a>]</td>
<td>[<a href="https://proceedings.neurips.cc/paper/2021/hash/ec7f346604f518906d35ef0492709f78-Abstract.html">FZ21</a>]</td>
<td>[<a href="https://proceedings.neurips.cc/paper_files/paper/2022/hash/3f52b555967a95ee850fcecbd29ee52d-Abstract-Conference.html">Lyu22</a>]</td>
</tr>
<tr>
<td>\(\rho\)-zero-concentrated DP</td>
<td>[<a href="https://link.springer.com/chapter/10.1007/978-3-662-53641-4_24">BS16</a>]</td>
<td>[<a href="https://link.springer.com/chapter/10.1007/978-3-662-53641-4_24">BS16</a>]</td>
<td>[<a href="https://proceedings.neurips.cc/paper/2021/hash/ec7f346604f518906d35ef0492709f78-Abstract.html">FZ21</a>]</td>
<td>[<a href="https://proceedings.neurips.cc/paper_files/paper/2022/hash/3f52b555967a95ee850fcecbd29ee52d-Abstract-Conference.html">Lyu22</a>]</td>
</tr>
<tr>
<td>\(\delta\)-approx. \(\rho\)-zCDP</td>
<td>[<a href="https://link.springer.com/chapter/10.1007/978-3-662-53641-4_24">BS16</a>]</td>
<td>[<a href="https://link.springer.com/chapter/10.1007/978-3-662-53641-4_24">BS16</a>]</td>
<td>[<a href="https://proceedings.mlr.press/v202/whitehouse23a.html">WRRW22</a>]</td>
<td> </td>
</tr>
</tbody>
</table>
<center><small>
* Only asymptotically optimal for small ε.
</small></center>
<p>This summary already suggests a few natural open questions: it is not known whether the fully adaptive composition results for \((\varepsilon,\delta)\)-DP can be improved, there is no fully adaptive composition theorem for \(f\)-DP, and no concurrent composition theorem for \((\rho,\delta)\)-approximate zCDP.</p>
<h4 id="reordering-mechanisms-during-the-privacy-analysis">Reordering mechanisms during the privacy analysis</h4>
<p>Let’s assume for a moment that the table above is completed, and that we have optimal composition theorems for all the variants of interest and all settings.
Consider an analyst using a differential privacy framework, and performing multiple operations in a fully adaptive way.
Some of these operations are analyzed using \(\rho\)-zCDP, others using \((\varepsilon,\delta)\)-DP, with varying parameters.
How should the privacy accounting be done in such a scenario?</p>
<p>In the context of sequential composition, it would be natural to <em>reorder</em> those mechanisms: consider the equivalent situation where all \(\rho\)-zCDP mechanisms occur first, and all \((\varepsilon,\delta)\)-DP mechanisms occur afterwards.
In this setting, the zCDP mechanisms can first be composed using the zCDP composition rule.
The overall zCDP guarantee can then be converted to \((\varepsilon,\delta)\)-DP, and composed with the other \((\varepsilon,\delta)\)-DP guarantees.
This will lead to a tighter privacy analysis than converting every individual \(\rho\)-zCDP mechanism to \((\varepsilon,\delta)\)-DP, and composing those guarantees.</p>
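<p>As a rough numerical illustration of why reordering helps — using the standard zCDP composition rule (the \(\rho\)s add) and the standard conversion from \(\rho\)-zCDP to \(\big(\rho + 2\sqrt{\rho\ln(1/\delta)}, \delta\big)\)-DP [BS16]; the parameters are arbitrary:</p>

```python
import math

def zcdp_to_approx_dp(rho, delta):
    """rho-zCDP implies (rho + 2*sqrt(rho*ln(1/delta)), delta)-DP [BS16]."""
    return rho + 2 * math.sqrt(rho * math.log(1 / delta))

rhos = [0.01] * 50  # fifty 0.01-zCDP mechanisms
delta = 1e-6

# Naive accounting: convert each mechanism to approximate DP first,
# then add up the epsilons (and deltas) with basic composition.
eps_naive = sum(zcdp_to_approx_dp(rho, delta / len(rhos)) for rho in rhos)

# Reordered accounting: compose under zCDP first (the rhos add),
# then convert the total guarantee once.
eps_reordered = zcdp_to_approx_dp(sum(rhos), delta)

assert eps_reordered < eps_naive
```

<p>Here the reordered analysis yields \(\varepsilon \approx 5.8\) instead of \(\varepsilon \approx 42.6\): converting each mechanism separately pays the \(\sqrt{\rho\ln(1/\delta)}\) overhead fifty times.</p>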
<p>However, we would need an additional theoretical result to perform this kind of reordering operation in a fully adaptive context: the fact that composition results exist for \((\varepsilon,\delta)\)-DP and \(\rho\)-zCDP does not mean they can be combined.
How can we resolve this problem, and make it possible to use the same privacy accounting techniques in the sequential setting as in the fully adaptive or fully concurrent settings?
This leads to a natural open question: when performing the privacy analysis of a privacy filter, can one “reorder” the mechanisms when composing them?
Answering this positively would allow DP frameworks to implement tighter privacy accounting at a relatively low cost in complexity.
It might very well be that the answer to this open question is negative.
In that case, proving such a separation result would be of significant theoretical interest in the study of DP composition.</p>
<h4 id="composing-privacy-loss-distributions">Composing privacy loss distributions</h4>
<p>When we say that a mechanism is \((\varepsilon,\delta)\)-DP, or \(\rho\)-zCDP, we are giving a “global” bound on the privacy loss random variable, defined by:
\[
\mathcal{L}_{D,D’}(o) =
\ln\left(\frac{\mathbb{P}\left[\mathcal{M}(D)=o\right]}{\mathbb{P}\left[\mathcal{M}(D’)=o\right]}\right)
\]
for all neighboring inputs \(D\) and \(D’\).</p>
<p>An alternative approach to privacy accounting consists in <em>fully</em> describing this random variable.
One approach to do this uses the formalism of <em>privacy loss distributions</em> (PLDs) [<a href="https://petsymposium.org/popets/2019/popets-2019-0029.php">SMM18</a>].
The PLD of a mechanism is defined as:
\[
\omega(y) = \mathbb{P}_{o\sim\mathcal{M}(D)}\left[\mathcal{L}_{D,D’}(o)=y\right].
\]</p>
<p>In the sequential composition setting, PLDs can be used for tight privacy analysis.
This relies on a conceptually simple result: if \(\omega\) is the PLD of \(\mathcal{M}\) and \(\omega’\) is the PLD of \(\mathcal{M}’\) on neighboring databases \(D\), \(D’\), then the PLD of the composition of \(\mathcal{M}\) and \(\mathcal{M}’\) is \(\omega\ast\omega’\), where \(\ast\) is the convolution operator.
Of course, when doing privacy accounting, we don’t want \(\omega\) and \(\omega’\) to depend on the pair of databases, so we replace them by <em>worst-case</em> PLDs, that are “larger” than all possible PLDs for neighboring databases.</p>
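<p>As a minimal numerical sketch of this machinery, consider \(\varepsilon_0\)-randomized response, whose worst-case PLD puts mass \(e^{\varepsilon_0}/(1+e^{\varepsilon_0})\) on loss \(+\varepsilon_0\) and the rest on \(-\varepsilon_0\). Composing \(n\) copies is an \(n\)-fold convolution, and the smallest \(\delta\) for a target \(\varepsilon\) can then be read off as \(\mathbb{E}_{y\sim\omega}[\max(0, 1-e^{\varepsilon-y})]\). This is illustrative code, not a production accountant:</p>

```python
import math
from collections import defaultdict

def rr_pld(eps0):
    """Worst-case PLD of eps0-DP randomized response."""
    p = math.exp(eps0) / (1 + math.exp(eps0))
    return {eps0: p, -eps0: 1 - p}

def convolve(pld_a, pld_b):
    """PLD of the sequential composition of two mechanisms."""
    out = defaultdict(float)
    for ya, pa in pld_a.items():
        for yb, pb in pld_b.items():
            # Round to avoid a proliferation of nearly-equal float keys.
            out[round(ya + yb, 9)] += pa * pb
    return dict(out)

def delta_for(pld, eps):
    """Smallest delta such that the composed mechanism is (eps, delta)-DP."""
    return sum(p * max(0.0, 1 - math.exp(eps - y)) for y, p in pld.items())

# Compose twenty copies of 0.1-randomized response.
pld = rr_pld(0.1)
for _ in range(19):
    pld = convolve(pld, rr_pld(0.1))

# Pure DP (basic composition) gives eps = 2.0 with delta = 0;
# the PLD quantifies the delta incurred by any smaller target eps.
assert delta_for(pld, 2.0) < 1e-12
```

<p>Numerical accountants in the references above follow the same idea, with careful discretization and error control.</p>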
<p>Using PLDs for privacy accounting can be done numerically [<a href="https://eprint.iacr.org/2017/1034">MM18</a>, <a href="https://arxiv.org/abs/2102.12412">KJH20</a>, <a href="http://proceedings.mlr.press/v130/koskela21a.html">KJPH21</a>, <a href="https://proceedings.neurips.cc/paper_files/paper/2021/hash/6097d8f3714205740f30debe1166744e-Abstract.html">GLW21</a>, <a href="https://proceedings.mlr.press/v162/ghazi22a.html">GKKM22</a>, <a href="https://arxiv.org/abs/2207.04380">DGKKM22</a>] or analytically [<a href="https://proceedings.mlr.press/v151/zhu22c.html">ZDW22</a>].
This family of approaches is convenient because it is very generic: DP frameworks can use a tight upper bound PLD when known, and fall back to a worst-case PLD corresponding to \(\varepsilon\)-DP or \((\varepsilon,\delta)\)-DP when the mechanism is too complex.
Unfortunately, the composition result mentioned above has only been proven in the sequential composition setting [<a href="https://eprint.iacr.org/2017/1034">MM18</a>].
Extending it to adaptive composition is straightforward, but extending it to the fully adaptive setting (with privacy filters) or the concurrent setting does not seem trivial.</p>
<p>This leads us to our last open question: can these privacy accounting techniques be used in the fully adaptive or concurrent settings?</p>
<h3 id="summary">Summary</h3>
<p>In this blog post, we gave a high-level overview of different settings and variants of composition theorems.
Along the way, we listed a number of natural open questions.</p>
<ol>
<li>Can we define a setting that generalizes both fully adaptive composition and concurrent composition? What composition results hold in that setting?</li>
<li>Can we “fill in the blanks” among existing composition results? Namely, can we prove optimal composition results for \((\varepsilon,\delta)\)-DP and \(f\)-DP in the fully adaptive setting, and for \((\varepsilon,\delta)\)-approximate zCDP in the concurrent setting?</li>
<li>In the fully adaptive setting with privacy filters, can one reorder mechanisms when computing their cumulative privacy loss, to optimize the privacy accounting?</li>
<li>Can we prove fully adaptive and concurrent composition results for privacy accounting based on privacy loss distributions?</li>
</ol>
<p>Progress on these open questions would either uncover surprising additional separation results, or enable usability and utility improvements to general-purpose DP infrastructure.
We’re excited about both prospects!</p>
<hr />
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p><a href="https://tmlt.dev">Tumult Analytics</a> is a differential privacy framework used by institutions such as the U.S. Census Bureau, the IRS, or the Wikimedia Foundation. It is developed by <a href="https://tmlt.io">Tumult Labs</a>, the employer of the author of this blog post. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>
Damien Desfontaines, Mon, 18 Sep 2023 21:00:00 -0400
https://differentialprivacy.org/open-problems-how-generic-can-composition-be/
Beyond Local Sensitivity via Down Sensitivity
<p>In <a href="/inverse-sensitivity/">our previous post</a>, we discussed local sensitivity and how we can get accuracy guarantees that scale with local sensitivity, which can be much better than the global sensitivity guarantees attained via standard noise addition mechanisms.
In this post, we will look at what we can do when even the local sensitivity is unbounded. This is obviously a challenging setting, but it turns out that not all hope is lost.</p>
<p>As a motivating example, suppose we have a dataset \(x=(x_1,x_2,\cdots,x_n)\) and we want to approximate \(\max_i x_i \) in a differentially private manner.
The difficulty is that adding a single element to \(x\) can increase the maximum arbitrarily. That is, if \(x’=(x_1,x_2,\cdots,x_n,\infty)\), then \(\max_i x’_i=\infty\). Differential privacy requires us to make the outputs \(M(x)\) and \(M(x’)\) indistinguishable, which seems to directly contradict our accuracy goal \(M(x) \approx \max_i x_i\).</p>
<p>One solution to the problem of unbounded sensitivity is to clip the inputs, so that the sensitivity becomes bounded. But this requires knowing a good a priori approximate upper bound on the \(x_i\)s. Trying to find such an upper bound is probably the very reason we want to approximate the maximum in the first place!</p>
<p>Another solution is to “aim lower:” Instead of aiming to approximate the largest element \(x_{(n)} := \max_i x_i\), we can aim to approximate the \(k\)-th largest element \(x_{(n-k+1)}\).
The \(k\)-th largest element has bounded local sensitivity, which means we can apply <a href="/inverse-sensitivity/">the inverse sensitivity mechanism</a> or similar tools.
And – spoiler alert – this is essentially what we will do. However, we will present an algorithm that is more general than just for approximating the maximum.</p>
<p>The algorithm we present is due to Fang, Dong, and Yi [<a href="https://cse.hkust.edu.hk/~yike/ShiftedInverse.pdf" title="Juanru Fang, Wei Dong, Ke Yi. Shifted Inverse: A General Mechanism for Monotonic Functions under User Differential Privacy. CCS 2022.">FDY22</a>].
In terms of applications, a natural setting where we may need to approximate functions of unbounded local sensitivity is when each person can contribute multiple items to the dataset. This setting is often referred to as “user-level differential privacy” or “user DP.”<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup>
For example, if we have a collection of web browsing histories, we may wish to estimate the total number of webpages visited; this has unbounded local sensitivity because a single person could visit an arbitrary number of webpages.</p>
<h2 id="down-sensitivity">Down Sensitivity</h2>
<p>Observe that, while <em>adding</em> one element to the input can increase the maximum arbitrarily, <em>removing</em> one element can only decrease it by the gap between the largest and second-largest elements \(x_{(n)}-x_{(n-1)}\). In other words, the maximum satisfies some kind of one-sided local sensitivity bound. This is the general property we will rely on.</p>
<p>We define the \(k\)-<em>down sensitivity</em><sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup> of the function \(f : \mathcal{X}^* \to \mathbb{R}\) at the input \(x\in\mathcal{X}^*\) as
<a id="downsensitivity"></a>\[\mathsf{DS}^k_f(x) := \sup_{x’ \subseteq x : \mathrm{dist}(x,x’) \le k} |f(x)-f(x’)|. \tag{1}\]
Here \(\mathrm{dist} : \mathcal{X}^* \times \mathcal{X}^* \to \mathbb{R}\) is the size of the symmetric difference between the two input tuples/multisets \(\mathrm{dist}(x,x’) = |x \setminus x’| + | x’ \setminus x |\), which defines a metric. In other words, it measures how many people’s data must be added or removed to get from one dataset to the other.
For comparison, the local sensitivity is
<a id="localsensitivity"></a>\[\mathsf{LS}^k_f(x) := \sup_{x’\in\mathcal{X}^* : \mathrm{dist}(x,x’) \le k} |f(x)-f(x’)|. \tag{2}\]
The difference between Equations 1 and 2 is simply that down sensitivity only considers removing elements from \(x\), while local sensitivity considers both addition and removal.
Thus, the down sensitivity is at most the local sensitivity, which is, in turn, upper bounded by the global sensitivity: \(\mathsf{DS}^k_f(x) \le \mathsf{LS}^k_f(x) \le k \cdot \mathsf{GS}_f\).</p>
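<p>For the running example of the maximum, the down sensitivity has a closed form: removing at most \(k\) elements can lower the maximum only to the \((k+1)\)-th largest value, so \(\mathsf{DS}^k_{\max}(x) = x_{(n)} - x_{(n-k)}\), while the local (and global) sensitivity is unbounded. A quick sketch:</p>

```python
def down_sensitivity_max(x, k):
    """k-down sensitivity of f(x) = max(x): removing the k largest
    entries lowers the maximum to the (k+1)-th largest value."""
    s = sorted(x, reverse=True)
    if k >= len(s):
        raise ValueError("cannot remove every element (max of an empty tuple)")
    return s[0] - s[k]

x = [3, 1, 4, 1, 5, 9, 2, 6]  # sorted descending: 9, 6, 5, 4, 3, 2, 1, 1
assert down_sensitivity_max(x, 1) == 3  # 9 - 6
assert down_sensitivity_max(x, 3) == 5  # 9 - 4
```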
<p>Intuitively, what is nice about down sensitivity is that it only considers the actual data we have at hand. It doesn’t consider any hypothetical people’s data that could be added to the dataset. It is appealing to only have to deal with “real” data.</p>
<p>Our goal now is to estimate \(f(x)\) in a differentially private manner, where the accuracy guarantee scales with the down sensitivity.</p>
<h2 id="monotonicity-assumption">Monotonicity Assumption</h2>
<p>In order to do anything, we need some assumptions about the function \(f : \mathcal{X}^* \to \mathcal{Y}\) that we are trying to approximate.
First we will assume that \(\mathcal{Y} \subseteq \mathbb{R}\) is finite and \(f\) is surjective.<sup id="fnref:3" role="doc-noteref"><a href="#fn:3" class="footnote" rel="footnote">3</a></sup>
The main assumption is monotonicity:
<a id="monotonicity"></a>\[\forall x’ \subseteq x \in \mathcal{X}^* ~~~ f(x’) \le f(x). \tag{3}\]
The maximum and many other example functions satisfy this assumption.</p>
<p>Intuitively, we need this assumption to ensure that the down sensitivity is well-behaved.
Specifically, Lemma 1 below requires monotonicity.</p>
<p>As an example of <a id="weirdnonmonotonicity"></a>what could happen if we don’t make this assumption, consider the function \(\mathrm{sum}(x) := \sum_i x_i\) and the pair of neighbouring inputs \(x=(1,1,\cdots,1)\in\mathcal{X}^n,x’=(1,1,\cdots,1,-100n)\in\mathcal{X}^{n+1}\). Then, for all \(1 \le k\le n\), we have \(\mathsf{DS}_{\mathrm{sum}}^k(x)=k\), but \(\mathsf{DS}_{\mathrm{sum}}^k(x’)=100n\).</p>
<p>Note that the sum is monotone if we restrict to non-negative inputs. In general, we can take any function \(g\) and convert it into a monotone function \(f\) by defining \(f(x) = \max\{ g(\check{x}) : \check{x} \subseteq x \}\). Depending on the context, this \(f\) may or may not be a good proxy for \(g\).</p>
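<p>For the sum, this monotone closure is easy to compute: the subset \(\check{x} \subseteq x\) maximizing the sum keeps exactly the positive entries, so \(f(x) = \sum_i \max(x_i, 0)\). This also neutralizes the pathological example above:</p>

```python
def monotone_sum(x):
    """Monotone closure of the sum, i.e. the max of sum over subsets:
    attained by keeping exactly the positive entries."""
    return sum(xi for xi in x if xi > 0)

assert monotone_sum([1, 1, 1]) == 3
assert monotone_sum([1, 1, 1, -100]) == 3  # the outlier no longer matters
```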
<h2 id="a-loss-with-bounded-global-sensitivity">A Loss With Bounded Global Sensitivity</h2>
<p>Given a monotone function \(f : \mathcal{X}^* \to \mathbb{R}\), we define a loss function \(\ell : \mathcal{X}^* \times \mathbb{R} \to \mathbb{Z}_{\ge 0}\) by
<a id="loss"></a>\[\ell(x,y) := \min\{ \mathrm{dist}(x,\tilde{x}) : \tilde{x} \subseteq x, f(\tilde{x}) \le y \}. \tag{4}\]
In other words, \(\ell(x,y)\) measures how many entries of \(x\) we need to remove to bring the function value down to \(f(\tilde{x}) \le y\).
Yet another way to think of it is that \(\ell(x,y)\) is the distance from the point \(x\) to the set \(f^{-1}((-\infty,y]) \cap \{ \tilde{x} : \tilde{x} \subseteq x \} \).</p>
<table>
<tbody>
<tr>
<td><img src="../images/shiftedinverseloss.png" alt="Plot of the loss corresponding to the maximum where the dataset exactly matches Binomial(5,0.5). This is a decreasing function with steps. There is also a vertical line indicating the true maximum value." /> Figure 1: Visualization of the loss \(\ell(x,y)\) corresponding to \(f(x)=\max_i x_i\) for a dataset representing the distribution \(\mathrm{Binomial}(5,1/2)\) i.e. the true maximum is \(5\) and the dataset is \(x=(0,\underbrace{1,1,1,1,1}_{5\times},\underbrace{2,2,\cdots,2}_{10\times},\underbrace{3,3,\cdots,3}_{10\times},\underbrace{4,4,4,4,4}_{5\times},5)\).</td>
</tr>
</tbody>
</table>
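<p>For the maximum, this loss has a closed form: to force the maximum down to at most \(y\), the cheapest valid subset keeps everything except the entries exceeding \(y\), so \(\ell(x,y)\) is simply the number of entries of \(x\) larger than \(y\). A sketch using the dataset from Figure 1:</p>

```python
def loss_max(x, y):
    """Loss from Equation 4 specialized to f = max: the number of
    entries that must be removed so that the maximum is at most y."""
    return sum(1 for xi in x if xi > y)

# The Binomial(5, 1/2)-shaped dataset from Figure 1 (true max = 5).
x = [0] + [1] * 5 + [2] * 10 + [3] * 10 + [4] * 5 + [5]

assert loss_max(x, 5) == 0   # nothing to remove
assert loss_max(x, 4) == 1   # remove the single 5
assert loss_max(x, 2) == 16  # remove the 3s, the 4s, and the 5
```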
<p>The key property we need is that this loss has bounded sensitivity. We split the proof into Lemmas 1 and 2.</p>
<blockquote>
<p><strong>Lemma 1.</strong>
Let \(f : \mathcal{X}^* \to \mathbb{R}\) satisfy the monotonicity property in <a href="#monotonicity">Equation 3</a>.
Define \(\ell : \mathcal{X}^* \times \mathbb{R} \to \mathbb{Z}_{\ge 0}\) as in <a href="#loss">Equation 4</a>. <br />
Let \(x’ \subseteq x \in \mathcal{X}^*\).
Then \(\ell(x’,y)\le\ell(x,y)\) for all \(y \in \mathbb{R}\).</p>
</blockquote>
<blockquote>
<p><em>Proof.</em>
Fix \(y \in \mathbb{R}\) and \(x’ \subseteq x \in \mathcal{X}^*\).
Let \(x_\Delta = x \setminus x’ \subseteq x\), so that \(x’ = x \setminus x_\Delta \).</p>
<p>Let \(\widehat{x} \subseteq x\) satisfy \(f(\widehat{x})\le y\) and \(\mathrm{dist}(x,\widehat{x})=\ell(x,y)\).
Define \(\widehat{x}’ = \widehat{x} \setminus x_\Delta\). This ensures \(\widehat{x}’ \subseteq x’\) and \[\mathrm{dist}(x’,\widehat{x}’) = \mathrm{dist}(x \setminus x_\Delta , \widehat{x} \setminus x_\Delta ) \le \mathrm{dist}(x,\widehat{x}).\]</p>
<p>By monotonicity, \(f(\widehat{x}’) \le f(\widehat{x}) \le y\).
Thus \[\ell(x’,y) = \min\{ \mathrm{dist}(x’,\tilde{x}’) : \tilde{x}’ \subseteq x’, f(\tilde{x}’) \le y \}\]\[ \le \mathrm{dist}(x’,\widehat{x}’) \le \mathrm{dist}(x,\widehat{x}) = \ell(x,y).\]
∎</p>
</blockquote>
<blockquote>
<p><strong>Lemma 2.</strong>
Let \(f : \mathcal{X}^* \to \mathbb{R}\).
Define \(\ell : \mathcal{X}^* \times \mathbb{R} \to \mathbb{Z}_{\ge 0}\) as in <a href="#loss">Equation 4</a>. <br />
Let \(x’ \subseteq x \in \mathcal{X}^*\).
Then \(\ell(x,y)\le\ell(x’,y)+\mathrm{dist}(x,x’)\) for all \(y \in \mathbb{R}\).</p>
</blockquote>
<blockquote>
<p><em>Proof.</em>
Fix \(y \in \mathbb{R}\) and \(x’ \subseteq x \in \mathcal{X}^*\).</p>
<p>Let \(\widehat{x}’ \subseteq x’\) satisfy \(f(\widehat{x}’)\le y\) and \(\mathrm{dist}(x’,\widehat{x}’)=\ell(x’,y)\).
Since \(\widehat{x}’ \subseteq x’ \subseteq x\), we have
\[\ell(x,y) = \min\{ \mathrm{dist}(x,\tilde{x}) : \tilde{x} \subseteq x, f(\tilde{x}) \le y \} \le \mathrm{dist}(x,\widehat{x}’) \]\[ \le \mathrm{dist}(x,x’) + \mathrm{dist}(x’,\widehat{x}’) = \ell(x’,y)+\mathrm{dist}(x,x’),\]
by the triangle inequality, as required.
∎</p>
</blockquote>
<p>Note that we only needed the monotonicity assumption for Lemma 1.
Combining the two lemmas gives \[ \forall x’ \subseteq x ~ \forall y ~~~~~ \ell(x’,y) \le \ell(x,y) \le \ell(x’,y) + \mathrm{dist}(x,x’).\]
Overall we have the following guarantee.</p>
<blockquote>
<p><strong>Proposition 3. (Global Sensitivity of the Loss)</strong>
Let \(f : \mathcal{X}^* \to \mathbb{R}\) satisfy the monotonicity property in <a href="#monotonicity">Equation 3</a>.
Define \(\ell : \mathcal{X}^* \times \mathbb{R} \to \mathbb{Z}_{\ge 0}\) as in <a href="#loss">Equation 4</a>. <br />
Then, for all \(x, x’ \in \mathcal{X}^*\) and all \(y \in \mathbb{R}\), we have \[|\ell(x,y)-\ell(x’,y)| \le \mathrm{dist}(x,x’).\]</p>
</blockquote>
<blockquote>
<p><em>Proof.</em>
Fix \(x, x’ \in \mathcal{X}^*\) and \(y \in \mathbb{R}\).
Let \(x’’ = x \cap x’\).
Since \(x’’ \subseteq x’\) and \(f\) is assumed to be monotone, Lemma 1 gives \(\ell(x’’ ,y) \le \ell(x’,y)\).
Also \(x’’ \subseteq x\), whence Lemma 2 gives \(\ell(x,y) \le \ell(x’’ , y) + \mathrm{dist}(x , x’’ )\).
Note that \( \mathrm{dist}(x , x’’ ) = | x \setminus x’’ | = | x \setminus x’ | \le \mathrm{dist}(x , x’ ).\)
Combining inequalities gives \(\ell(x,y) \le \ell(x’ , y) + \mathrm{dist}(x , x’ )\). The other direction is symmetric.
∎</p>
</blockquote>
<h2 id="the-shifted-inverse-sensitivity-mechanism">The Shifted Inverse Sensitivity Mechanism</h2>
<p>Let’s recap where we are: We have a monotone function \(f : \mathcal{X}^* \to \mathcal{Y}\), where \(\mathcal{Y} \subseteq \mathbb{R}\) is finite. We want to approximate \(f(x)\) privately. <a href="#loss">Equation 4</a> gives us a loss \(\ell\) that is low-sensitivity.
We have \(\ell(x,f(x))=0\) and, as \(y\) decreases below \(f(x)\), the loss \(\ell(x,y)\) increases (at a rate governed by the down sensitivity of \(f\)).
So far, so good. The problem is that \(\ell(x,y)=0\) for every \(y \ge f(x)\): overestimates are never penalized. This means we can’t just throw this loss into the exponential mechanism.</p>
<p>Intuitively, the way we get around this problem is by looking for a value \(y\) such that the loss \(\ell(x,y)\) is greater than zero, but not too large. That is, we “shift” our goal from trying to minimize \(\ell(x,y)\) to minimizing something like \(|\ell(x,y)-\tau|\) for some integer \(\tau>0\).
Going back to the example of the maximum, this corresponds to aiming for the \((\tau+1)\)-th largest value instead of the largest value.
The hope is that we get an output with \(|\ell(x,y)-\tau|<\tau\), which for the maximum example corresponds roughly to getting a value between the largest value and the \(2\tau\)-th largest value.</p>
<p>Fang, Dong, and Yi [<a href="https://cse.hkust.edu.hk/~yike/ShiftedInverse.pdf" title="Juanru Fang, Wei Dong, Ke Yi. Shifted Inverse: A General Mechanism for Monotonic Functions under User Differential Privacy. CCS 2022.">FDY22</a>] directly apply the exponential mechanism [<a href="https://ieeexplore.ieee.org/document/4389483" title="Frank McSherry, Kunal Talwar. Mechanism Design via Differential Privacy. FOCS 2007.">MT07</a>] with a loss of the form \(|\ell(x,y)-\tau|\).<sup id="fnref:4" role="doc-noteref"><a href="#fn:4" class="footnote" rel="footnote">4</a></sup>
This yields the following guarantee.</p>
<blockquote>
<p><strong>Theorem 4. (Shifted Inverse Sensitivity Mechanism)</strong>
Let \(f : \mathcal{X}^* \to \mathcal{Y}\) be monotone (<a href="#monotonicity">Equation 3</a>), where \(\mathcal{Y} \subseteq \mathbb{R}\) is finite. Let \(\varepsilon>0\) and \(\beta \in (0,1)\).
Then there exists an \(\varepsilon\)-differentially private \(M : \mathcal{X}^* \to \mathcal{Y}\) with the following accuracy guarantee.
For all \(x \in \mathcal{X}^*\), we have
\[\mathbb{P}\left[ f(x) \ge M(x) \ge f(x) - \mathsf{DS}_f^{2\tau}(x) \right] \ge 1 - \beta,\]
where \(\tau=\left\lceil\frac{2}{\varepsilon}\log\left(\frac{|\mathcal{Y}|}{\beta}\right)\right\rceil\).</p>
</blockquote>
<p>This is exactly the kind of guarantee we were aiming for; the accuracy scales with the down sensitivity, which could be much smaller than either the local sensitivity or the global sensitivity.
Note that the guarantee gives an <i>under</i>estimate: \(M(x) \le f(x)\). This is inherent. If the function has infinite “up sensitivity,” then we cannot give an upper bound in a differentially private manner.</p>
<p>The shifted inverse sensitivity mechanism has the same limitations as the inverse sensitivity mechanism that we discussed in <a href="/inverse-sensitivity/">our previous post</a>. Namely, computing the loss can be computationally intractable for general functions and we have a \(\log|\mathcal{Y}|\) dependence. (We will discuss how to improve this next.)
An additional limitation is that we need the monotonicity assumption. But, as <a href="#weirdnonmonotonicity">discussed earlier</a>, down sensitivity behaves weirdly without this assumption.</p>
<h2 id="beyond-the-exponential-mechanism">Beyond the Exponential Mechanism</h2>
<p>Applying the exponential mechanism to find \(y\) with \(\ell(x,y)\approx\tau\) yields a clean guarantee in Theorem 4. However, there are other methods we can apply which may be simpler<sup id="fnref:4:1" role="doc-noteref"><a href="#fn:4" class="footnote" rel="footnote">4</a></sup> and give better asymptotic guarantees.</p>
<p>Observe that the loss \(\ell(x,y)\) is a decreasing function of \(y\). The exponential mechanism does not exploit this structure.
A very natural alternative algorithm is to perform binary search.<sup id="fnref:5" role="doc-noteref"><a href="#fn:5" class="footnote" rel="footnote">5</a></sup></p>
<p>We describe the algorithm in pseudocode and briefly analyze it: The input is the loss \(\ell\) defined in <a href="#loss">Equation 4</a>, the dataset \(x\), an ordered enumeration of the set of outputs \(\mathcal{Y} = \{y_0 \le y_1 \le \cdots \le y_{|\mathcal{Y}|-1} \}\), and parameters \(\sigma,\tau>0\).</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">def</span> <span class="nf">noisy_binary_search</span><span class="p">(</span><span class="n">loss</span><span class="p">,</span> <span class="n">x</span><span class="p">,</span> <span class="n">Y</span><span class="p">,</span> <span class="n">sigma</span><span class="p">,</span> <span class="n">tau</span><span class="p">):</span>
<span class="n">i_min</span> <span class="o">=</span> <span class="mi">0</span>
<span class="n">i_max</span> <span class="o">=</span> <span class="nb">len</span><span class="p">(</span><span class="n">Y</span><span class="p">)</span> <span class="o">-</span> <span class="mi">1</span>
<span class="k">while</span> <span class="n">i_min</span> <span class="o">+</span> <span class="mi">1</span> <span class="o"><</span> <span class="n">i_max</span><span class="p">:</span>
<span class="n">k</span> <span class="o">=</span> <span class="p">(</span><span class="n">i_min</span> <span class="o">+</span> <span class="n">i_max</span><span class="p">)</span> <span class="o">//</span> <span class="mi">2</span>
<span class="n">v</span> <span class="o">=</span> <span class="n">loss</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">Y</span><span class="p">[</span><span class="n">k</span><span class="p">])</span> <span class="o">+</span> <span class="n">laplace</span><span class="p">(</span><span class="n">sigma</span><span class="p">)</span>
<span class="k">if</span> <span class="n">v</span> <span class="o"><=</span> <span class="n">tau</span><span class="p">:</span>
<span class="n">i_max</span> <span class="o">=</span> <span class="n">k</span>
<span class="k">else</span><span class="p">:</span>
<span class="n">i_min</span> <span class="o">=</span> <span class="n">k</span>
<span class="k">return</span> <span class="n">Y</span><span class="p">[</span><span class="n">i_max</span><span class="p">]</span>
</code></pre></div></div>
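<p>To make the pseudocode concrete, here is a plain, runnable sketch (our own, not from [FDY22]) for a toy monotone function: the sum of non-negative per-person contributions. For this \(f\), the loss from Equation 4 can be computed greedily by repeatedly removing the largest remaining contribution; the helper names <code class="language-plaintext highlighter-rouge">laplace</code> and <code class="language-plaintext highlighter-rouge">sum_loss</code> are our own.</p>

```python
import math
import random

def laplace(scale):
    # Inverse-CDF sampler for Laplace(scale) noise, stdlib only.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def sum_loss(x, y):
    # Loss from Equation 4 for the monotone function f = sum of
    # non-negative contributions: the fewest elements we must remove
    # from x so that the remaining sum is <= y. Greedily dropping the
    # largest elements is optimal here. (Assumes y >= 0, so the empty
    # dataset always qualifies.)
    total, removed = sum(x), 0
    for v in sorted(x, reverse=True):
        if total <= y:
            break
        total -= v
        removed += 1
    return removed

def noisy_binary_search(loss, x, Y, sigma, tau):
    # Direct transcription of the pseudocode above.
    i_min, i_max = 0, len(Y) - 1
    while i_min + 1 < i_max:
        k = (i_min + i_max) // 2
        v = loss(x, Y[k]) + laplace(sigma)
        if v <= tau:
            i_max = k
        else:
            i_min = k
    return Y[i_max]
```

<p>For example, with \(x=(5,3,1)\) (so \(f(x)=9\)) and \(\mathcal{Y}=\{0,\dots,19\}\), calling <code class="language-plaintext highlighter-rouge">noisy_binary_search(sum_loss, [5, 3, 1], list(range(20)), sigma, tau)</code> returns an underestimate of \(f(x)\) with high probability once \(\sigma\) and \(\tau\) are calibrated as discussed next.</p>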
<p>Since each iteration satisfies \(\frac1\sigma\)-differential privacy and there are at most \(\lceil \log_2 |\mathcal{Y}| \rceil-1\) iterations, the algorithm satisfies \(\varepsilon\)-differential privacy for \(\varepsilon = \frac{\log_2 |\mathcal{Y}|}{\sigma} \) by <a href="/composition-basics/">basic composition</a>.
Alternatively, using advanced composition, we see that the algorithm satisfies \(\rho\)-zCDP [<a href="https://arxiv.org/abs/1605.02065" title="Mark Bun, Thomas Steinke. Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds. TCC 2016.">BS16</a>] for \(\rho = \frac{\log_2 |\mathcal{Y}|}{2\sigma^2} \).</p>
<p>By a union bound, each noise sample has magnitude at most \(\tau\) with probability at least \(1 - \exp(-\tau/\sigma) \cdot \log_2|\mathcal{Y}|\).<sup id="fnref:b" role="doc-noteref"><a href="#fn:b" class="footnote" rel="footnote">6</a></sup>
Assuming the noise magnitudes are \(\le\tau\), the binary search maintains the invariants \(\ell(x,y_{i_\min})>0\) and \(\ell(x,y_{i_\max})\le 2\tau\).
These invariants imply \(y_{i_\min} < f(x)\) and \(y_{i_\max} \ge f(x) - \mathsf{DS}_f^{2\tau}(x)\) respectively.
At the end of the binary search, \(i_\min+1 \ge i_\max\) and thus \(y_{i_\min} < f(x)\) implies \(y_{i_\max} \le f(x)\).</p>
<p>Setting \(\tau = \sigma \cdot \log\left(\frac{\log_2|\mathcal{Y}|}{\beta}\right)\) and \(\sigma = \frac{\log_2|\mathcal{Y}|}{\varepsilon}\) yields a result similar to Theorem 4.</p>
<p>Setting \(\tau = \sigma \cdot \log\left(\frac{\log_2|\mathcal{Y}|}{\beta}\right)\) and \(\sigma = \sqrt{\frac{\log_2|\mathcal{Y}|}{2\rho}}\) yields the following result for concentrated differential privacy [<a href="https://arxiv.org/abs/1603.01887" title="Cynthia Dwork, Guy N. Rothblum. Concentrated Differential Privacy. 2016.">DR16</a>,<a href="https://arxiv.org/abs/1605.02065" title="Mark Bun, Thomas Steinke. Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds. TCC 2016.">BS16</a>].
Note that setting \(\rho = \frac{\varepsilon^2}{4\log(1/\delta)+4\varepsilon}\) suffices to give \((\varepsilon,\delta)\)-differential privacy [e.g. <a href="https://arxiv.org/abs/2210.00597v4" title="Thomas Steinke. Composition of Differential Privacy & Privacy Amplification by Subsampling. 2022.">S22</a> Remark 15].</p>
<blockquote>
<p><strong>Theorem 5. (Shifted Inverse Sensitivity Mechanism with Concentrated Differential Privacy)</strong>
Let \(f : \mathcal{X}^* \to \mathcal{Y}\) be monotone (<a href="#monotonicity">Equation 3</a>), where \(\mathcal{Y} \subseteq \mathbb{R}\) is finite. Let \(\rho>0\) and \(\beta \in (0,1)\).
Then there exists a \(\rho\)-zCDP \(M : \mathcal{X}^* \to \mathcal{Y}\) with the following accuracy guarantee.
For all \(x \in \mathcal{X}^*\), we have
\[\mathbb{P}\left[ f(x) \ge M(x) \ge f(x) - \mathsf{DS}_f^{2\tau}(x) \right] \ge 1 - \beta,\]
where \(\tau = \sqrt{\frac{\log_2|\mathcal{Y}|}{2\rho}} \cdot \log\left(\frac{\log_2|\mathcal{Y}|}{\beta}\right) \).</p>
</blockquote>
<p>Comparing Theorems 4 and 5 we see an asymptotic improvement in the dependence on the size of the output space \(|\mathcal{Y}|\). (This improvement is the benefit of advanced composition.) Theorem 4 gives \(\tau = \Theta(\log|\mathcal{Y}|)\), while Theorem 5 gives \(\tau = \Theta(\sqrt{\log|\mathcal{Y}|} \cdot \log \log |\mathcal{Y}|)\).<sup id="fnref:6" role="doc-noteref"><a href="#fn:6" class="footnote" rel="footnote">7</a></sup>
In exchange, Theorem 4 gives a pure differential privacy guarantee (i.e. \((\varepsilon,\delta)\)-DP with \(\delta=0\)), while Theorem 5 gives a concentrated differential privacy guarantee, which can be translated to approximate differential privacy (i.e. \((\varepsilon,\delta)\)-DP with \(\delta>0\)).</p>
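<p>The zCDP-to-approximate-DP conversion quoted before Theorem 5 is a one-liner when calibrating \(\rho\) to a target \((\varepsilon,\delta)\). The following helper (ours) simply evaluates the formula \(\rho = \frac{\varepsilon^2}{4\log(1/\delta)+4\varepsilon}\) from [S22, Remark 15]:</p>

```python
import math

def zcdp_for_approx_dp(epsilon, delta):
    # A rho-zCDP guarantee with this rho implies (epsilon, delta)-DP
    # [S22, Remark 15]; use it to pick rho in Theorem 5 from a target
    # (epsilon, delta).
    return epsilon ** 2 / (4 * math.log(1 / delta) + 4 * epsilon)
```

<p>E.g., a target of \((\varepsilon,\delta)=(1,10^{-6})\) corresponds to a budget of \(\rho \approx 0.017\).</p>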
<p>We can actually do even better than binary search!
The problem we’re solving with binary search is actually an instance of the <em>generalized interior point problem</em> [<a href="http://www.thomas-steinke.net/tcdp.pdf" title="Mark Bun, Cynthia Dwork, Guy N. Rothblum, Thomas Steinke. Composable and Versatile Privacy via Truncated CDP. STOC 2018.">BDRS18</a>] (which is essentially the same as <em>quasi-concave optimization</em> [<a href="https://arxiv.org/abs/2211.06387" title="Edith Cohen, Xin Lyu, Jelani Nelson, Tamás Sarlós, Uri Stemmer. Õptimal Differentially Private Learning of Thresholds and Quasi-Concave Optimization. STOC 2023.">CLNSS23</a>]).
This problem and its variants have been extensively studied in the context of private learning [<a href="https://arxiv.org/abs/1407.2674" title="Amos Beimel, Kobbi Nissim, Uri Stemmer. Private Learning and Sanitization: Pure vs. Approximate Differential Privacy. APPROX/RANDOM 2013.">BNS13</a>,<a href="https://arxiv.org/abs/1504.07553" title="Mark Bun, Kobbi Nissim, Uri Stemmer, Salil Vadhan. Differentially Private Release and Learning of Threshold Functions. FOCS 2015.">BNSV15</a>,etc.].
The upshot is that, under \((\varepsilon,\delta)\)-differential privacy, we can achieve the same result as Theorems 4 and 5 with \(\tau = \frac{\log(1/\delta)}{\varepsilon} \cdot 2^{O(\log^* |\mathcal{Y}|)}\), where \(\log^*\) denotes the <a href="https://en.wikipedia.org/wiki/Iterated_logarithm">iterated logarithm</a>.</p>
<blockquote>
<p><strong>Theorem 6. (Shifted Inverse Sensitivity Mechanism with Approximate Differential Privacy)</strong>
Let \(f : \mathcal{X}^* \to \mathcal{Y}\) be monotone (<a href="#monotonicity">Equation 3</a>), where \(\mathcal{Y} \subseteq \mathbb{R}\) is finite. Let \(\varepsilon>0\) and \(\delta \in (0,0.1)\).
Then there exists an \((\varepsilon,\delta)\)-differentially private \(M : \mathcal{X}^* \to \mathcal{Y}\) with the following accuracy guarantee.
For all \(x \in \mathcal{X}^*\), we have
\[\mathbb{P}\left[ f(x) \ge M(x) \ge f(x) - \mathsf{DS}_f^{2\tau}(x) \right] \ge \frac{9}{10},\]
where \(\tau = \frac{\log(1/\delta)}{\varepsilon} \cdot 2^{O(\log^* |\mathcal{Y}|)}\).</p>
</blockquote>
<p>The iterated logarithm is an unbelievably slow-growing function. Thus Theorem 6 improves on Theorems 4 and 5 in terms of the dependence on \(|\mathcal{Y}|\). However, the dependence on \(\delta\) is worse than Theorem 5 (\(\tau=\Theta(\log(1/\delta))\) versus \(\tau=\Theta(\sqrt{\log(1/\delta)})\)). (Theorem 4 achieves \(\delta=0\).)</p>
<h2 id="conclusion">Conclusion</h2>
<p>In this post we’ve covered the shifted inverse sensitivity mechanism of Fang, Dong, and Yi [<a href="https://cse.hkust.edu.hk/~yike/ShiftedInverse.pdf" title="Juanru Fang, Wei Dong, Ke Yi. Shifted Inverse: A General Mechanism for Monotonic Functions under User Differential Privacy. CCS 2022.">FDY22</a>], as well as some extensions.</p>
<p>The key takeaway is that we can privately approximate a monotone function with error scaling with the down sensitivity. This is particularly interesting in settings where the local and global sensitivities are large.
Down sensitivity is an appealing notion because it is entirely defined by the “real” dataset; its definition (<a href="#downsensitivity">Equation 1</a>) does not consider hypothetical data items that aren’t in the dataset.</p>
<p>Fang, Dong, and Yi [<a href="https://cse.hkust.edu.hk/~yike/ShiftedInverse.pdf" title="Juanru Fang, Wei Dong, Ke Yi. Shifted Inverse: A General Mechanism for Monotonic Functions under User Differential Privacy. CCS 2022.">FDY22</a>] show that the shifted inverse sensitivity mechanism attains strong instance optimality guarantees. In other words, up to logarithmic factors, no differentially private mechanism can achieve better error guarantees.</p>
<p>We can view the shifted inverse sensitivity mechanism as a reduction. It reduces the task of approximating a monotone function to a problem akin to approximating the median. (More precisely, it reduces it to a generalized interior point problem.) We think this is a neat addition to the toolkit of differentially private algorithms.</p>
<hr />
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>We emphasize that user-level differential privacy is not an alternative privacy definition, rather it is the standard definition of differential privacy with a data schema allowing multiple data items per person. In contrast, most of the differential privacy literature assumes a one-to-one correspondence between people and data items. Note that we prefer the terminology “person”/”people” rather than “user”/”users.” The “user” terminology is specific to the tech industry and may be confusing in other contexts; e.g., in the context of the US Census Bureau, “users” are the entities (such as government agencies) that use data provided by the bureau, rather than the people whose data the bureau collects. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>The name “down sensitivity” is due to Cummings and Durfee [<a href="https://arxiv.org/abs/1804.08645" title="Rachel Cummings, David Durfee. Individual sensitivity preprocessing for data privacy. SODA 2020.">CD20</a>], who attribute the idea to Raskhodnikova and Smith [<a href="https://arxiv.org/abs/1504.07912" title="Sofya Raskhodnikova, Adam Smith. Efficient lipschitz extensions for highdimensional graph statistics and node private degree distributions. FOCS 2016.">RS16</a>]. The name <em>local empirical sensitivity</em> has also been used [<a href="https://arxiv.org/abs/1304.4795" title="Shixi Chen, Shuigeng Zhou. Recursive mechanism: towards node differential privacy and unrestricted joins. SIGMOD 2013.">CZ13</a>]. The \(k\)-down sensitivity should not be confused with the down sensitivity at distance \(k\), which is defined by \(\mathsf{DS}_f^{(k)}(x) := \sup \{ \mathsf{DS}_f^1(x’) : \mathrm{dist}(x,x’) \le k \}\). Note that \(\mathsf{DS}_f^k(x) \le k \cdot \mathsf{DS}_f^{(k-1)}(x)\). <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:3" role="doc-endnote">
<p>The finiteness assumption can be relaxed somewhat, but we do need some kind of constraint on the output space to ensure utility. The surjectivity assumption simply ensures that the loss is always finite; alternatively we could allow the loss to take the value infinity. Note that we define \(\mathcal{X}^* := \bigcup_{n=0}^\infty \mathcal{X}^n\) to be the set of all finite tuples of elements in \(\mathcal{X}\); we use subset notation \(x’ \subseteq x \) to denote that \(x’\) can be obtained by removing elements from \(x\) (and potentially permuting). <a href="#fnref:3" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:4" role="doc-endnote">
<p>Alas, there is a technical issue we need to deal with in order to apply the exponential mechanism: The loss function is far from continuous, so there may not exist any \(y\) such that \(|\ell(x,y)-\tau|<\tau\). For example, computing the maximum of the dataset \(x=(1,1,\cdots,1)\) gives a loss function with \(\ell(x,y)=0\) for all \(y \ge 1\) and \(\ell(x,y)=n\) for all \(y < 1\); i.e., no \(y\) gives \(0<\ell(x,y)<n\). The way we fix this issue is as follows. Observe that we can decompose \(|\ell(x,y)-\tau|=\max\{\ell(x,y)-\tau,\tau-\ell(x,y)\}\). Now we define a slightly different loss function: \[\overline{\ell}(x,y) := \min\{ \mathrm{dist}(x,\tilde{x}) : \tilde{x} \subseteq x, f(\tilde{x}) < y \}. \tag{A}\] Equation A defining \(\overline{\ell}(x,y)\) differs from <a href="#loss">Equation 4</a> defining \(\ell(x,y)\) only in that we replace “\(\le\)” with “\(<\)”. The modified loss \(\overline\ell\) still has low sensitivity; the proof is identical to that of Proposition 3. Now we can run the exponential mechanism with the loss \[\ell^*(x,y) := \max\{\ell(x,y)-\tau,\tau-\overline{\ell}(x,y)\}. \tag{B}\] This loss has low sensitivity and, for \(\hat{y} = \min\{f(\tilde{x}):\tilde{x}\subseteq x, \mathrm{dist}(x,\tilde{x})\le\tau\}\), we have \(\ell(x,\hat{y})\le\tau\) and \(\overline{\ell}(x,\hat{y})>\tau\), which implies \(\ell^*(x,\hat{y}) \le 0\). Thus we can use \(\ell^*(x,y)\) in place of \(|\ell(x,y)-\tau|\) to fix this technical issue. Setting \(\tau=\left\lceil\frac{2}{\varepsilon}\log\left(\frac{|\mathcal{Y}|}{\beta}\right)\right\rceil\) and running the exponential mechanism with loss \(\ell^*\) yields Theorem 4. Specifically, the guarantee of the exponential mechanism is \(\mathbb{P}\left[ \ell^*(x,M(x)) < \frac{2}{\varepsilon}\log\left(\frac{|\mathcal{Y}|}{\beta}\right)\right]\ge 1-\beta\). 
Then \(\tau-\overline{\ell}(x,M(x))< \frac{2}{\varepsilon}\log\left(\frac{|\mathcal{Y}|}{\beta}\right)\) implies \(\overline{\ell}(x,M(x))>0\), which implies \(M(x)\le f(x)\). Similarly, \(\ell(x,M(x))-\tau < \frac{2}{\varepsilon}\log\left(\frac{|\mathcal{Y}|}{\beta}\right)\) implies \(\ell(x,M(x))<2\tau\), which implies that \(M(x) \ge f(\tilde{x})\) for some \(\tilde{x}\subseteq x\) with \(\mathrm{dist}(x,\tilde{x})<2\tau\); by the definition of down sensitivity, \(|f(x)-f(\tilde{x})| \le \mathsf{DS}_f^{2\tau}(x)\) and so \(M(x) \ge f(\tilde{x}) \ge f(x) - \mathsf{DS}_f^{2\tau}(x)\), as required. <a href="#fnref:4" class="reversefootnote" role="doc-backlink">↩</a> <a href="#fnref:4:1" class="reversefootnote" role="doc-backlink">↩<sup>2</sup></a></p>
</li>
<li id="fn:5" role="doc-endnote">
<p>To the best of our knowledge, differentially private binary search was first proposed by Blum, Ligett, and Roth [<a href="https://arxiv.org/abs/1109.2229" title="Avrim Blum, Katrina Ligett, Aaron Roth. A Learning Theory Approach to Non-Interactive Database Privacy. STOC 2008.">BLR08</a>]. This algorithmic idea has been used in various other papers [e.g., <a href="https://arxiv.org/abs/1604.04618" title="Mark Bun, Thomas Steinke, Jonathan Ullman. Make Up Your Mind: The Price of Online Queries in Differential Privacy. SODA 2017.">BSU17</a>,<a href="https://arxiv.org/abs/1706.05069" title="Vitaly Feldman, Thomas Steinke. Generalization for Adaptively-chosen Estimators via Stable Median. COLT 2017.">FS17</a>,<a href="https://arxiv.org/abs/2106.10333" title="Joerg Drechsler, Ira Globus-Harris, Audra McMillan, Jayshree Sarathy, Adam Smith. Non-parametric Differentially Private Confidence Intervals for the Median. 2021.">DGMSS21</a>]. <a href="#fnref:5" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:b" role="doc-endnote">
<p>Note that we can also use Gaussian noise instead of Laplace noise. This would yield a slightly better accuracy guarantee for the same concentrated differential privacy guarantee. Specifically, this would give \(\tau = O\left(\sqrt{\frac1\rho \cdot \log |\mathcal{Y}| \cdot \log \left( \frac{\log | \mathcal{Y} |}{\beta}\right)}\right)\). <a href="#fnref:b" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:6" role="doc-endnote">
<p>We can shave the loglog term in Theorem 5 to get \(\tau = \Theta(\sqrt{\log|\mathcal{Y}|})\) either by using a noise-tolerant version of binary search [<a href="https://www.cs.cornell.edu/~rdk/papers/karpr2.pdf" title="Richard M. Karp, Robert Kleinberg. Noisy binary search and its applications. SODA 2007.">KK07</a>] or by using non-independent noise [<a href="https://journalprivacyconfidentiality.org/index.php/jpc/article/view/648/631" title="Thomas Steinke, Jonathan Ullman. Between Pure and Approximate Differential Privacy. JPC 2016">SU15</a>,<a href="https://arxiv.org/abs/2010.01457" title="Arun Ganesh, Jiazheng Zhao. Privately Answering Counting Queries with Generalized Gaussian Mechanisms. 2020.">GZ20</a>,<a href="https://arxiv.org/abs/2012.09116" title="Badih Ghazi, Ravi Kumar, Pasin Manurangsi. On Avoiding the Union Bound When Answering Multiple Differentially Private Queries. COLT 2021.">GKM21</a>,<a href="https://arxiv.org/abs/2012.03817" title="Yuval Dagan, Gil Kur. A bounded-noise mechanism for differential privacy. COLT 2022.">DK22</a>]. <a href="#fnref:6" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>
Thomas SteinkeTue, 12 Sep 2023 10:00:00 -0700
https://differentialprivacy.org/down-sensitivity/
https://differentialprivacy.org/down-sensitivity/Beyond Global Sensitivity via Inverse Sensitivity<p>The most well-known and widely-used method for achieving differential privacy is to compute the true function value \(f(x)\) and then add Laplace or Gaussian noise scaled to the <em>global sensitivity</em> of \(f\).
This may be overly conservative. In this post we’ll show how we can do better.</p>
<p>The global sensitivity of a function \(f : \mathcal{X}^* \to \mathbb{R}\) is defined by \[ \mathsf{GS}_f := \sup_{x,x’\in\mathcal{X}^* : \mathrm{dist}(x,x’) \le 1} |f(x)-f(x’)|, \tag{1}\] where \(\mathrm{dist}(x,x’)\le 1\) denotes that \(x\) and \(x’\) are neighbouring datasets (i.e. they differ only by the addition, removal, or replacement of one person’s data); more generally, \(\mathrm{dist}(\cdot,\cdot)\) is the corresponding metric on datasets (i.e., Hamming distance).<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup></p>
<p>The global sensitivity considers datasets that have nothing to do with the dataset at hand and which could be completely unrealistic.
Many functions have infinite global sensitivity, but, on reasonably nice datasets, their <em>local sensitivity</em> is much lower.</p>
<h2 id="local-sensitivity">Local Sensitivity</h2>
<p>The \(k\)-local sensitivity<sup id="fnref:a" role="doc-noteref"><a href="#fn:a" class="footnote" rel="footnote">2</a></sup> of a function \(f : \mathcal{X}^* \to \mathbb{R}\) at \(x \in \mathcal{X}^*\) is defined by \[\mathsf{LS}^k_f(x) := \sup_{x’\in\mathcal{X}^* : \mathrm{dist}(x,x’) \le k} |f(x)-f(x’)|. \tag{2}\]
Often, we fix \(k=1\) and we may drop the superscript: \(\mathsf{LS}_f(x) := \mathsf{LS}_f^1(x)\).
Note that the local sensitivity is always at most the global sensitivity: \(\mathsf{LS}_f^k(x) \le k \cdot \mathsf{GS}_f\).</p>
<p>As a concrete example, the median has infinite global sensitivity, but for realistic data the local sensitivity is quite reasonable.
Specifically, \[\mathsf{LS}^k_{\mathrm{median}}(x_1, \cdots, x_n) = \max\left\{ \left|x_{(\tfrac{n+1}{2})}-x_{(\tfrac{n+1}{2}+k)}\right|, \left|x_{(\tfrac{n+1}{2})}-x_{(\tfrac{n+1}{2}-k)}\right| \right\},\tag{3}\] where \( x_{(1)} \le x_{(2)} \le \cdots \le x_{(n)}\) denotes the input in <a href="https://en.wikipedia.org/wiki/Order_statistic">sorted order</a> and \(n\) is assumed to be odd, so, in particular, \(\mathrm{median}(x_1, \cdots, x_n) = x_{(\tfrac{n+1}{2})}\).
For example, if \(X_1, \cdots, X_n\) are i.i.d. samples from a standard Gaussian and \(k \ll n\), then \(\mathsf{LS}^k_{\mathrm{median}}(X_1, \cdots, X_n) \le O(k/n)\) with high probability.</p>
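<p>Equation 3 translates directly into code. The helper below is our own sketch; it clamps the order-statistic indices at the ends of the (odd-length) dataset as a convenience, which matters only when \(k \ge \tfrac{n+1}{2}\).</p>

```python
def median_local_sensitivity(x, k=1):
    # k-local sensitivity of the median, per Equation 3 (n assumed odd).
    xs = sorted(x)
    n = len(xs)
    m = (n + 1) // 2 - 1            # 0-indexed position of the median
    up = xs[min(m + k, n - 1)]      # x_((n+1)/2 + k), clamped at the ends
    down = xs[max(m - k, 0)]        # x_((n+1)/2 - k), clamped at the ends
    return max(abs(xs[m] - up), abs(xs[m] - down))
```

<p>For instance, <code class="language-plaintext highlighter-rouge">median_local_sensitivity([1, 5, 10, 20, 100], k=1)</code> is \(10\), while the global sensitivity of the median is infinite.</p>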
<h2 id="using-local-sensitivity">Using Local Sensitivity</h2>
<p>Intuitively, the local sensitivity is the “real” sensitivity of the function and the global sensitivity is only a worst-case upper bound.
Thus it seems natural to add noise scaled to the local sensitivity instead of the global sensitivity.</p>
<p>Unfortunately, naïvely adding noise scaled to local sensitivity doesn’t satisfy differential privacy.
The problem is that the local sensitivity itself can reveal information.
For example, consider the median on the inputs \(x=(1,2,2),x’=(2,2,2)\). The output distributions of the algorithm on these two inputs must be similar.
In both cases the median is \(2\), so that is a good start for ensuring that the distributions are similar.
But the local sensitivity is different: \(\mathsf{LS}^1_{\mathrm{median}}(x)=1\) versus \(\mathsf{LS}^1_{\mathrm{median}}(x’)=0\).
So, if we add noise scaled to local sensitivity, then, on input \(x’\), we deterministically output \(2\), while, on input \(x\), we output a random number. If we use continuous Laplace or Gaussian noise, then the random number will be a non-integer almost surely. Thus the output perfectly distinguishes the two inputs, which is a catastrophic violation of differential privacy.</p>
<p>The good news is that we can exploit local sensitivity; we just need to do a bit more work.
In fact, there are many methods in the differential privacy literature to exploit local sensitivity.</p>
<p>The best-known methods for exploiting local sensitivity are <em>smooth sensitivity</em> [<a href="https://cs-people.bu.edu/ads22/pubs/NRS07/NRS07-full-draft-v1.pdf" title="Kobbi Nissim, Sofya Raskhodnikova, Adam Smith. Smooth Sensitivity and Sampling in Private Data Analysis. STOC 2007.">NRS07</a>]<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">3</a></sup> and <em>propose-test-release</em> [<a href="https://www.stat.cmu.edu/~jinglei/dl09.pdf" title="Cynthia Dwork, Jing Lei. Differential Privacy and Robust Statistics. STOC 2009.">DL09</a>]<sup id="fnref:3" role="doc-noteref"><a href="#fn:3" class="footnote" rel="footnote">4</a></sup>.</p>
<p>In this post we will cover a different general-purpose technique. This technique is folklore.<sup id="fnref:4" role="doc-noteref"><a href="#fn:4" class="footnote" rel="footnote">5</a></sup> It was first systematically studied by Asi and Duchi [<a href="https://arxiv.org/abs/2005.10630" title="Hilal Asi, John Duchi. Near Instance-Optimality in Differential Privacy. 2020.">AD20</a>,<a href="https://papers.nips.cc/paper/2020/hash/a267f936e54d7c10a2bb70dbe6ad7a89-Abstract.html" title="Hilal Asi, John Duchi. Instance-optimality in differential privacy via approximate inverse sensitivity mechanisms. NeurIPS 2020.">AD20</a>], who also named the method the <em>inverse sensitivity mechanism</em>.</p>
<h2 id="the-inverse-sensitivity-mechanism">The Inverse Sensitivity Mechanism</h2>
<p>Consider a function \(f : \mathcal{X}^* \to \mathcal{Y}\).
Our goal is to estimate \(f(x)\) in a differentially private manner.
But we do not make any assumptions about the global sensitivity of the function.</p>
<p>For simplicity we will assume that \(\mathcal{Y}\) is finite and that \(f\) is <a href="https://en.wikipedia.org/wiki/Surjective_function">surjective</a>.<sup id="fnref:5" role="doc-noteref"><a href="#fn:5" class="footnote" rel="footnote">6</a></sup></p>
<p>Now we define a loss function \(\ell : \mathcal{X}^* \times \mathcal{Y} \to \mathbb{Z}_{\ge0}\) by \[\ell(x,y) := \min\left\{ \mathrm{dist}(x,\tilde{x}) : \tilde{x}\in\mathcal{X}^*, f(\tilde{x})=y \right\}.\tag{4}\]
In other words, \(\ell(x,y)\) measures how many entries of \(x\) we need to add or remove until \(f(x)=y\).
Yet another way to think of it is that \(\ell(x,y)\) is the distance from the point \(x\) to the set \(f^{-1}(y)\). (Hence the name inverse sensitivity.)</p>
<p>The loss is minimized by the desired answer: \(\ell(x,f(x))=0\). Intuitively, the loss \(\ell(x,y)\) increases as \(y\) moves further from \(f(x)\). So approximately minimizing this loss should produce a good approximation to \(f(x)\), as desired.</p>
<p>The trick is that this loss always has bounded global sensitivity – i.e., \(\mathsf{GS}_\ell \le 1\) – no matter what the sensitivity of \(f\) is!</p>
<blockquote>
<p><strong>Lemma 1.</strong> Let \(f : \mathcal{X}^* \to \mathcal{Y}\) be arbitrary and define \(\ell : \mathcal{X}^* \times \mathcal{Y} \to \mathbb{Z}_{\ge0}\) as in Equation 4. Then, for all \(x,x’\in\mathcal{X}^*\) with \(\mathrm{dist}(x,x’)\le 1\) and all \(y \in \mathcal{Y}\), we have \(|\ell(x,y)-\ell(x’,y)|\le 1\).</p>
</blockquote>
<blockquote>
<p><em>Proof.</em>
Fix \(x,x’\in\mathcal{X}^*\) with \(\mathrm{dist}(x,x’)\le 1\) and \(y \in \mathcal{Y}\).
Let \(\widehat{x} \in\mathcal{X}^*\) satisfy \(\ell(x,y)=\mathrm{dist}(x,\widehat{x})\) and \(f(\widehat{x})=y\).
By definition, \[\ell(x’,y) = \min\left\{ \mathrm{dist}(x’,\tilde{x}) : f(\tilde{x})=y \right\} \le \mathrm{dist}(x’,\widehat{x}).\]
By the triangle inequality, \[\mathrm{dist}(x’,\widehat{x}) \le \mathrm{dist}(x’,x)+\mathrm{dist}(x,\widehat{x}) \le 1 + \ell(x,y).\]
Thus \(\ell(x’,y) \le \ell(x,y)+1\) and, by symmetry, \(\ell(x,y) \le \ell(x’,y)+1\), as required. ∎</p>
</blockquote>
<p>This means that we can run the exponential mechanism [<a href="https://ieeexplore.ieee.org/document/4389483" title="Frank McSherry, Kunal Talwar. Mechanism Design via Differential Privacy. FOCS 2007.">MT07</a>] to select from \(\mathcal{Y}\) using the loss \(\ell\).<sup id="fnref:6" role="doc-noteref"><a href="#fn:6" class="footnote" rel="footnote">7</a></sup> That is, the inverse sensitivity mechanism is defined by
\[\forall y \in \mathcal{Y} ~~~~~ \mathbb{P}[M(x)=y] := \frac{\exp\left(-\frac{\varepsilon}{2}\ell(x,y)\right)}{\sum_{y’\in\mathcal{Y}}\exp\left(-\frac{\varepsilon}{2}\ell(x,y’)\right)}.\tag{5}\]
By the properties of the exponential mechanism and Lemma 1, \(M\) satisfies differential privacy:</p>
<blockquote>
<p><strong>Theorem 2. (Privacy of the Inverse Sensitivity Mechanism)</strong> Let \(M : \mathcal{X}^* \to \mathcal{Y}\) be as defined in Equation 5 with the loss from Equation 4. Then \(M\) satisfies \(\varepsilon\)-differential privacy (<a href="/exponential-mechanism-bounded-range/">and \(\frac18\varepsilon^2\)-zCDP</a>).</p>
</blockquote>
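<p>Since Equation 5 is just the exponential mechanism with loss \(\ell\), sampling from \(M\) takes only a few lines once \(\ell\) is computable. Below is a minimal sketch (ours, not from [AD20]) that takes the loss as a black box; as a toy example we use \(f(x) = |x|\), the dataset size, whose inverse-sensitivity loss is exactly \(\ell(x,y)=\big||x|-y\big|\), since each unit of distance adds or removes one element.</p>

```python
import math
import random

def inverse_sensitivity_mechanism(loss, x, Y, epsilon):
    # Sample y with probability proportional to exp(-(epsilon/2) * loss(x, y)),
    # i.e. the exponential mechanism of Equation 5.
    weights = [math.exp(-0.5 * epsilon * loss(x, y)) for y in Y]
    return random.choices(Y, weights=weights)[0]

def count_loss(x, y):
    # Inverse-sensitivity loss for the toy function f(x) = len(x):
    # distance k from x changes the dataset size by at most k.
    return abs(len(x) - y)
```

<p>For instance, <code class="language-plaintext highlighter-rouge">inverse_sensitivity_mechanism(count_loss, ["a"] * 5, list(range(11)), 1.0)</code> returns values concentrated around \(5\); for this particular loss the output distribution is a truncated two-sided geometric distribution centred at \(f(x)\).</p>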
<h2 id="utility-guarantee">Utility Guarantee</h2>
<p>The privacy guarantee of the inverse sensitivity mechanism is easy and, in particular, it doesn’t depend on the properties of \(f\).
This means that the utility will need to depend on the properties of \(f\).</p>
<p>By the standard properties of the exponential mechanism, we can guarantee that the output has low loss:</p>
<blockquote>
<p><strong>Lemma 3.</strong> Let \(M : \mathcal{X}^* \to \mathcal{Y}\) be as defined in Equation 5 with the loss from Equation 4. For all inputs \(x \in \mathcal{X}^*\) and all \(\beta\in(0,1)\), we have \[\mathbb{P}\left[\ell(x,M(x)) < \frac2\varepsilon\log\left(\frac{|\mathcal{Y}|}{\beta}\right) \right] \ge 1-\beta.\tag{6}\]</p>
</blockquote>
<blockquote>
<p><em>Proof.</em>
Let \(B_x = \left\{ y \in \mathcal{Y} : \ell(x,y) \ge \frac2\varepsilon\log\left(\frac{|\mathcal{Y}|}{\beta}\right) \right\}\) be the subset of \(\mathcal{Y}\) with high loss.
Then \[ \mathbb{P}[M(x)\in B_x] = \frac{\sum_{y \in B_x} \exp\left(-\frac{\varepsilon}{2}\ell(x,y)\right)}{\sum_{y’\in\mathcal{Y}}\exp\left(-\frac{\varepsilon}{2}\ell(x,y’)\right)} \]\[ \le \frac{|B_x| \cdot \exp\left(-\frac{\varepsilon}{2}\frac2\varepsilon\log\left(\frac{|\mathcal{Y}|}{\beta}\right) \right)}{\exp\left(-\frac{\varepsilon}{2}\ell(x,f(x))\right)}\]\[= \frac{|B_x| \cdot \frac{\beta}{|\mathcal{Y}|}}{1} \le \beta, \] as required. ∎</p>
</blockquote>
<p>Now we need to translate this loss bound into something easier to interpret – local sensitivity.</p>
<p>Suppose \(y \gets M(x)\). Then we have some loss \(k=\ell(x,y)\). What this means is that there exists \(\tilde{x}\in\mathcal{X}^*\) with \(f(\tilde{x})=y\) and \(\mathrm{dist}(x,\tilde{x})\le k\). By the definition of local sensitivity, \(|f(x)-y| = |f(x)-f(\tilde{x})| \le \mathsf{LS}_f^k(x)\). This means we can translate the loss guarantee of Lemma 3 into an accuracy guarantee in terms of local sensitivity:</p>
<blockquote>
<p><strong>Theorem 4. (Utility of the Inverse Sensitivity Mechanism)</strong> Let \(M : \mathcal{X}^* \to \mathcal{Y}\) be as defined in Equation 5 with the loss from Equation 4. For all inputs \(x \in \mathcal{X}^*\) and all \(\beta\in(0,1)\), we have \[\mathbb{P}\left[\left|M(x)-f(x)\right| \le \mathsf{LS}_f^k(x) \right] \ge 1-\beta,\tag{7}\] where \(k=\left\lfloor\frac2\varepsilon\log\left(\frac{|\mathcal{Y}|}{\beta}\right)\right\rfloor\).</p>
</blockquote>
<p>We can tie this back to our concrete example of the median. Per Equation 3, \[\mathsf{LS}^k_{\mathrm{median}}(x_1, \cdots, x_n) \le \left|x_{(\tfrac{n+1}{2}+k)}-x_{(\tfrac{n+1}{2}-k)}\right| .\]
Thus the error guarantee of Theorem 4 for the median would scale with the spread of the data. E.g., if \(k=\tfrac{n+1}{4}\), then \(\mathsf{LS}^k_{\mathrm{median}}(x_1, \cdots, x_n)\) is at most the interquartile range of the data.</p>
<p>How does this compare with the usual global sensitivity approach?
The \(\varepsilon\)-differentially private Laplace mechanism is given by \(\widehat{M}(x):=f(x)+\mathsf{Laplace}(\mathsf{GS}_f/\varepsilon)\). For all \(x \in \mathcal{X}^*\) and all \(\beta\in(0,1/2)\), we have the utility guarantee \[\mathbb{P}\left[\left|\widehat{M}(x)-f(x)\right| \le \mathsf{GS}_f \cdot \frac1\varepsilon \log\left(\frac{1}{2\beta}\right) \right] \ge 1-\beta.\tag{8}\]
Comparing Equations 7 and 8, we see that neither guarantee dominates the other. On one hand, the local sensitivity can be much smaller than the global sensitivity. On the other hand, we pick up a dependence on \(\log|\mathcal{Y}|\). In particular, in the worst case where the local sensitivity matches the global sensitivity \(\mathsf{LS}_f^k(x)=k\cdot\mathsf{GS}_f\), the inverse sensitivity mechanism is worse by a factor of \[\frac{\mathsf{LS}_f^k(x)}{\mathsf{GS}_f \cdot \frac1\varepsilon \log\left(\frac{1}{2\beta}\right)} = 2 \frac{\log(2|\mathcal{Y}|)}{\log(1/2\beta)}+2.\tag{9}\]
Hence the inverse sensitivity mechanism is most useful in situations where the local sensitivity is significantly smaller than the global sensitivity.</p>
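<p>The Laplace tail bound behind this comparison is easy to check numerically: if \(Y \sim \mathsf{Laplace}(b)\), then \(\mathbb{P}[|Y| > b\log(1/\beta)] = \beta\) exactly. A quick Monte Carlo sanity check (Python with numpy assumed):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
eps, beta = 1.0, 0.05    # privacy parameter and failure probability
b = 1.0 / eps            # Laplace scale for a sensitivity-1 query

noise = rng.laplace(scale=b, size=1_000_000)
threshold = b * np.log(1 / beta)

coverage = np.mean(np.abs(noise) <= threshold)
print(coverage)  # close to 1 - beta = 0.95
```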
<h2 id="conclusion">Conclusion</h2>
<p>In this post we’ve covered the inverse sensitivity mechanism, showing that it is private regardless of the sensitivity of the function \(f\) and that its error guarantees scale with the local sensitivity of \(f\), rather than its global sensitivity.</p>
<p>The inverse sensitivity mechanism is a simple demonstration that there is more to differential privacy than simply adding noise scaled to global sensitivity; there are many more techniques in the literature.</p>
<p>The inverse sensitivity mechanism has two main limitations. First, it is, in general, not computationally efficient. Computing the loss function is intractable for an arbitrary \(f\) (but can be done efficiently for several examples like the median and variants of principal component analysis and linear regression [<a href="https://papers.nips.cc/paper/2020/hash/a267f936e54d7c10a2bb70dbe6ad7a89-Abstract.html" title="Hilal Asi, John Duchi. Instance-optimality in differential privacy via approximate inverse sensitivity mechanisms. NeurIPS 2020.">AD20</a>]). Second, the \(\log|\mathcal{Y}|\) term in the accuracy guarantee is problematic when the output space is large, such as when we have high-dimensional outputs.
While there are other techniques that can be used instead of inverse sensitivity, they suffer from some of the same limitations. Thus finding ways around these limitations is an <a href="/colt23-bsp/">active research topic</a> [<a href="https://arxiv.org/abs/1905.13229" title="Mark Bun, Gautam Kamath, Thomas Steinke, Zhiwei Steven Wu. Private Hypothesis Selection. NeurIPS 2019.">BKSW19</a>,<a href="https://cse.hkust.edu.hk/~yike/ShiftedInverse.pdf" title="Juanru Fang, Wei Dong, Ke Yi. Shifted Inverse: A General Mechanism for Monotonic Functions under User Differential Privacy. CCS 2022.">FDY22</a>,<a href="https://arxiv.org/abs/2212.05015" title="Samuel B. Hopkins, Gautam Kamath, Mahbod Majid, Shyam Narayanan. Robustness Implies Privacy in Statistical Estimation. STOC 2023.">HKMN23</a>,<a href="https://arxiv.org/abs/2301.07078" title="John Duchi, Saminul Haque, Rohith Kuditipudi. A Fast Algorithm for Adaptive Private Mean Estimation. COLT 2023.">DHK23</a>,<a href="https://arxiv.org/abs/2301.12250" title="Gavin Brown, Samuel B. Hopkins, Adam Smith. Fast, Sample-Efficient, Affine-Invariant Private Mean and Covariance Estimation for Subgaussian Distributions. COLT 2023.">BHS23</a>,<a href="https://arxiv.org/abs/2302.01855" title="Hilal Asi, Jonathan Ullman, Lydia Zakynthinou. From Robustness to Privacy and Back. 2023.">AUZ23</a>].</p>
<p>The inverse sensitivity mechanism’s accuracy can be shown to be instance-optimal up to logarithmic factors [<a href="https://arxiv.org/abs/2005.10630" title="Hilal Asi, John Duchi. Near Instance-Optimality in Differential Privacy. 2020.">AD20</a>,<a href="https://papers.nips.cc/paper/2020/hash/a267f936e54d7c10a2bb70dbe6ad7a89-Abstract.html" title="Hilal Asi, John Duchi. Instance-optimality in differential privacy via approximate inverse sensitivity mechanisms. NeurIPS 2020.">AD20</a>]. That is, up to logarithmic factors, no differentially private mechanism can achieve better error guarantees.
Up to logarithmic factors, the inverse sensitivity mechanism outperforms other methods for exploiting local sensitivity, namely smooth sensitivity [<a href="https://cs-people.bu.edu/ads22/pubs/NRS07/NRS07-full-draft-v1.pdf" title="Kobbi Nissim, Sofya Raskhodnikova, Adam Smith. Smooth Sensitivity and Sampling in Private Data Analysis. STOC 2007.">NRS07</a>]<sup id="fnref:2:1" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">3</a></sup> and propose-test-release [<a href="https://www.stat.cmu.edu/~jinglei/dl09.pdf" title="Cynthia Dwork, Jing Lei. Differential Privacy and Robust Statistics. STOC 2009.">DL09</a>]<sup id="fnref:3:1" role="doc-noteref"><a href="#fn:3" class="footnote" rel="footnote">4</a></sup>.</p>
<p>We leave you with a riddle: What can we do if even the local sensitivity of our function is unbounded? For example, suppose we want to approximate \(f(x) = \max_i x_i\). Surprisingly, there are still things we can do; see <a href="/down-sensitivity/">our follow-up post</a>.</p>
<hr />
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>We define \(\mathcal{X}^* = \bigcup_{n = 0}^\infty \mathcal{X}^n\) to be the set of all input tuples of arbitrary size. The metric \(\mathrm{dist} : \mathcal{X}^* \times \mathcal{X}^* \to \mathbb{R}\) can be arbitrary. E.g., we can allow addition, removal, and/or replacement of an individual’s data. For simplicity, we consider univariate functions here. But the definitions of global and local sensitivity easily extend to vector-valued functions by taking a norm: \[ \mathsf{GS}_f := \sup_{x,x’\in\mathcal{X}^* : \mathrm{dist}(x,x’) \le 1} \|f(x)-f(x’)\|.\] If we use the 2-norm, then this cleanly corresponds to adding spherical Gaussian noise. The 1-norm corresponds to adding independent Laplace noise to the coordinates. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:a" role="doc-endnote">
<p>The local sensitivity is also known as the <em>local modulus of continuity</em> [<a href="https://arxiv.org/abs/2005.10630" title="Hilal Asi, John Duchi. Near Instance-Optimality in Differential Privacy. 2020.">AD20</a>,<a href="https://papers.nips.cc/paper/2020/hash/a267f936e54d7c10a2bb70dbe6ad7a89-Abstract.html" title="Hilal Asi, John Duchi. Instance-optimality in differential privacy via approximate inverse sensitivity mechanisms. NeurIPS 2020.">AD20</a>]. Note that this should not be confused with the local sensitivity at distance \(k\) [<a href="https://cs-people.bu.edu/ads22/pubs/NRS07/NRS07-full-draft-v1.pdf" title="Kobbi Nissim, Sofya Raskhodnikova, Adam Smith. Smooth Sensitivity and Sampling in Private Data Analysis. STOC 2007.">NRS07</a>], which is defined by \(\sup \{ \mathsf{LS}_f^1(x’) : \mathrm{dist}(x,x’) \le k \}\). <a href="#fnref:a" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>Briefly, smooth sensitivity is an upper bound on the local sensitivity which itself has low sensitivity in a multiplicative sense. That is, \(\mathsf{LS}_f^1(x) \le \mathsf{SS}_f^t(x)\) and \(\mathsf{SS}_f^t(x) \le e^t \cdot \mathsf{SS}_f^t(x’) \) for neighbouring \(x,x’\). This suffices to ensure that we can add noise scaled to \(\mathsf{SS}_f^t(x)\). However, that noise usually needs to be more heavy-tailed than for global sensitivity [<a href="https://proceedings.neurips.cc/paper/2019/hash/3ef815416f775098fe977004015c6193-Abstract.html" title="Mark Bun, Thomas Steinke. Average-Case Averages: Private Algorithms for Smooth Sensitivity and Mean Estimation. NeurIPS 2019.">BS19</a>]. <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a> <a href="#fnref:2:1" class="reversefootnote" role="doc-backlink">↩<sup>2</sup></a></p>
</li>
<li id="fn:3" role="doc-endnote">
<p>Roughly, the propose-test-release framework computes an upper bound on the local sensitivity in a differentially private manner and then uses this upper bound as the noise scale. (We hope to give more detail about both propose-test-release and smooth sensitivity in future posts.) <a href="#fnref:3" class="reversefootnote" role="doc-backlink">↩</a> <a href="#fnref:3:1" class="reversefootnote" role="doc-backlink">↩<sup>2</sup></a></p>
</li>
<li id="fn:4" role="doc-endnote">
<p>Properly attributing the inverse sensitivity mechanism is difficult. The earliest published instances of the inverse sensitivity mechanism of which we are aware are from 2011 and 2013 [<a href="https://www.cs.columbia.edu/~rwright/Publications/pods11.pdf" title="Darakhshan Mir, S. Muthukrishnan, Aleksandar Nikolov, Rebecca N. Wright. Pan-private algorithms via statistics on sketches. PODS 2011.">MMNW11</a>§3.1,<a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4681528/" title="Aaron Johnson, Vitaly Shmatikov. Privacy-preserving data exploration in genome-wide association studies. KDD 2013.">JS13</a>§5]; but this was not novel even then. Asi and Duchi [<a href="https://arxiv.org/abs/2005.10630" title="Hilal Asi, John Duchi. Near Instance-Optimality in Differential Privacy. 2020.">AD20</a>§1.2] state that McSherry and Talwar [<a href="https://ieeexplore.ieee.org/document/4389483" title="Frank McSherry, Kunal Talwar. Mechanism Design via Differential Privacy. FOCS 2007.">MT07</a>] considered it in 2007. In any case, the name we use was coined in 2020 [<a href="https://arxiv.org/abs/2005.10630" title="Hilal Asi, John Duchi. Near Instance-Optimality in Differential Privacy. 2020.">AD20</a>]. <a href="#fnref:4" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:5" role="doc-endnote">
<p>Assuming that the output space \(\mathcal{Y}\) is finite is a significant assumption. While it can be relaxed a bit [<a href="https://papers.nips.cc/paper/2020/hash/a267f936e54d7c10a2bb70dbe6ad7a89-Abstract.html" title="Hilal Asi, John Duchi. Instance-optimality in differential privacy via approximate inverse sensitivity mechanisms. NeurIPS 2020.">AD20</a>], it is to some extent an unavoidable limitation [<a href="https://arxiv.org/abs/1504.07553" title="Mark Bun, Kobbi Nissim, Uri Stemmer, Salil Vadhan. Differentially Private Release and Learning of Threshold Functions. FOCS 2015.">BNSV15</a>,<a href="https://arxiv.org/abs/1806.00949" title="Noga Alon, Roi Livni, Maryanthe Malliaris, Shay Moran. Private PAC learning implies finite Littlestone dimension. STOC 2019.">ALMM19</a>]. For example, to apply the inverse sensitivity mechanism to the median, we must discretize and bound the inputs; bounding the inputs does impose a finite global sensitivity, but the dependence on the bound is logarithmic, so the bound can be fairly large. Assuming that the function is surjective is a minor assumption that ensures that the loss in Equation 4 is always well-defined; otherwise we can define the loss to be infinite for points that are not in the range of the function. <a href="#fnref:5" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:6" role="doc-endnote">
<p>Note that we can use other selection algorithms, such as permute-and-flip [<a href="https://arxiv.org/abs/2010.12603" title="Ryan McKenna, Daniel Sheldon. Permute-and-Flip: A new mechanism for differentially private selection. NeurIPS 2020.">MS20</a>] or report-noisy-max [<a href="https://arxiv.org/abs/2105.07260" title="Zeyu Ding, Daniel Kifer, Sayed M. Saghaian N. E., Thomas Steinke, Yuxin Wang, Yingtai Xiao, Danfeng Zhang. The Permute-and-Flip Mechanism is Identical to Report-Noisy-Max with Exponential Noise. 2021.">DKSSWXZ21</a>] or gap-max [<a href="https://arxiv.org/abs/1409.2177" title="Kamalika Chaudhuri, Daniel Hsu, Shuang Song. The Large Margin Mechanism for Differentially Private Maximization. NIPS 2014.">CHS14</a>,<a href="https://dl.acm.org/doi/10.1145/3188745.3188946" title=" Mark Bun, Cynthia Dwork, Guy N. Rothblum, Thomas Steinke. Composable and versatile privacy via truncated CDP. STOC 2018.">BDRS18</a>,<a href="https://arxiv.org/abs/1905.13229" title="Mark Bun, Gautam Kamath, Thomas Steinke, Zhiwei Steven Wu. Private Hypothesis Selection. NeurIPS 2019.">BKSW19</a>]. <a href="#fnref:6" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>
Thomas Steinke, Tue, 05 Sep 2023 09:00:00 -0700
https://differentialprivacy.org/inverse-sensitivity/
Covariance-Aware Private Mean Estimation, Efficiently<p>Last week, the Mark Fulk award for best student paper at <a href="https://learningtheory.org/colt2023/">COLT 2023</a> was awarded to the following two papers on private mean estimation:</p>
<ul>
<li><a href="https://arxiv.org/abs/2301.07078">A Fast Algorithm for Adaptive Private Mean Estimation</a>, by <a href="https://web.stanford.edu/~jduchi/">John Duchi</a>, <a href="https://dblp.org/pid/252/5821.html">Saminul Haque</a>, and <a href="https://web.stanford.edu/~rohithk/">Rohith Kuditipudi</a> <strong>[<a href="https://arxiv.org/abs/2301.07078">DHK23</a>]</strong>;</li>
<li><a href="https://arxiv.org/abs/2301.12250">Fast, Sample-Efficient, Affine-Invariant Private Mean and Covariance Estimation for Subgaussian Distributions</a> by <a href="https://cs-people.bu.edu/grbrown/">Gavin Brown</a>, <a href="https://www.samuelbhopkins.com/">Samuel B. Hopkins</a>, and <a href="https://cs-people.bu.edu/ads22/">Adam Smith</a> <strong>[<a href="https://arxiv.org/abs/2301.12250">BHS23</a>]</strong>.</li>
</ul>
<p>The main result of both papers is the same: the first computationally-efficient \(O(d)\)-sample algorithm for differentially-private Gaussian mean estimation in Mahalanobis distance.
In this post, we’re going to unpack the result and explain what this means.</p>
<p>Gaussian mean estimation is a classic statistical task: given \(X_1, \dots, X_n \in \mathbb{R}^d\) sampled i.i.d. from a \(d\)-dimensional Gaussian \(N(\mu, \Sigma)\), output a vector \(\hat \mu \in \mathbb{R}^d\) that approximates the true mean \(\mu \in \mathbb{R}^d\).
But what do we mean by <em>approximates</em>?
What distance measure should we use?
A reasonable first guess is the \(\ell_2\)-norm: output an estimate \(\hat \mu\) that minimizes \(\|\hat \mu - \mu\|_2\).</p>
<p>However, we would ideally measure the quality of an estimate in an <em>affine-invariant</em> manner: if the problem instance (i.e., the estimate, the dataset, and the underlying distribution) is shifted and rescaled, then the error should remain unchanged.
Affine invariance allows us to perform such transformations of our data and not artificially make the problem easier or harder.
This property clearly isn’t satisfied by the \(\ell_2\)-norm: simply scaling the problem down would allow us to report an estimate with arbitrarily low error.
In other words, the distance metric needs to be calibrated to the covariance \(\Sigma \in \mathbb{R}^{d \times d}\).</p>
<p>Instead, we consider error measured according to the <em>Mahalanobis distance</em>: output an estimate \(\hat \mu\) that minimizes \(\|\Sigma^{-1/2}(\hat \mu - \mu)\|_2\), where \(\Sigma\) is the (unknown) covariance of the underlying distribution.
Note that, if the covariance matrix \(\Sigma = I\), then this reduces to the \(\ell_2\)-distance.
Indeed, a valid interpretation of the Mahalanobis distance is to imagine rescaling the problem so that the covariance \(\Sigma\) is mapped to the identity matrix, and measuring \(\ell_2\)-distance after this transformation.
A common way to think about Mahalanobis distance operationally is that it necessitates a more accurate estimate in directions with small variance, while permitting more error in directions with large variance.</p>
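<p>Both properties of the Mahalanobis distance, its reduction to the \(\ell_2\)-distance when \(\Sigma = I\) and its invariance under invertible affine maps, are easy to verify numerically. A small sketch (Python with numpy assumed; the function name is ours):</p>

```python
import numpy as np

def mahalanobis(mu_hat, mu, sigma):
    """Mahalanobis error ||Sigma^{-1/2} (mu_hat - mu)||_2."""
    diff = mu_hat - mu
    # solve Sigma z = diff rather than forming Sigma^{-1} explicitly
    return float(np.sqrt(diff @ np.linalg.solve(sigma, diff)))

# Affine invariance: mapping the estimate, the mean, and the covariance
# through an invertible A (mu -> A mu, Sigma -> A Sigma A^T) leaves the
# error unchanged, unlike the plain l2 distance.
rng = np.random.default_rng(0)
d = 3
mu, mu_hat = rng.normal(size=d), rng.normal(size=d)
A = rng.normal(size=(d, d))

d1 = mahalanobis(mu_hat, mu, np.eye(d))        # Sigma = I: just l2 distance
d2 = mahalanobis(A @ mu_hat, A @ mu, A @ A.T)  # transformed instance
assert np.isclose(d1, d2)
```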
<p>OK, so how do we learn the mean of a Gaussian in Mahalanobis distance?
In the non-private setting, the answer is simple: just take the empirical mean \(\hat \mu = \frac{1}{n} \sum_{i=1}^n X_i\)!
It turns out that with \(O(d)\) samples, the empirical mean provides an accurate estimate (in Mahalanobis distance) of the true mean \(\mu\).
Note that these guarantees hold regardless of the true covariance matrix \(\Sigma\).</p>
<p>It isn’t quite so easy when we want to do things privately.
The most natural approach would be to add noise to the empirical mean.
However, we first have to “clip” the datapoints (i.e., rescale any points that are “too large”) in order to limit the sensitivity of this statistic.
This is where the challenges arise: we would ideally like to clip the data based on the shape of the (unknown) covariance matrix \(\Sigma\) <strong>[<a href="https://arxiv.org/abs/1805.00216">KLSU19</a>]</strong>.
Deviating significantly from \(\Sigma\) would either introduce bias due to clipping too many points, or add excessive amounts of noise.
Unfortunately, the covariance matrix \(\Sigma\) is unknown, and privately estimating it (in an appropriate metric) requires \(\Omega(d^{3/2})\) samples <strong>[<a href="https://arxiv.org/abs/2205.08532">KMS22</a>]</strong>.
This is substantially larger than the \(O(d)\) sample complexity of non-private Gaussian mean estimation.
Furthermore, this covariance estimation step really is the bottleneck.
Given a coarse estimate of \(\Sigma\), only \(O(d)\) additional samples are required to estimate the mean privately in Mahalanobis distance.
This leads to the intriguing question: is it possible to privately estimate the mean of a Gaussian <em>without</em> explicitly estimating the covariance matrix?</p>
<p>The answer is yes!
A couple years back, Brown, Gaboardi, Smith, Ullman, and Zakynthinou <strong>[<a href="https://arxiv.org/abs/2106.13329">BGSUZ21</a>]</strong> gave two different algorithms for private Gaussian mean estimation in Mahalanobis distance, which both require only \(O(d)\) samples.
Interestingly, the two algorithms are quite different from each other.
One simply adds noise to the empirical mean based on the empirical covariance matrix.
The other one turns to a technique from robust statistics, sampling a point with large <em>Tukey depth</em> using the exponential mechanism.
As described here, neither of these methods is differentially private yet – they additionally require a pre-processing step which checks if the dataset is sufficiently well-behaved, which happens with high probability when the data is generated according to a Gaussian distribution.
The major drawback of both algorithms: they require exponential time to compute.</p>
<p>The two awarded papers <strong>[<a href="https://arxiv.org/abs/2301.07078">DHK23</a>]</strong> and <strong>[<a href="https://arxiv.org/abs/2301.12250">BHS23</a>]</strong> resolve this issue, giving the first <em>computationally efficient</em> \(O(d)\) sample algorithms for private mean estimation in Mahalanobis distance.
Interestingly, the algorithms in both papers follow the same recipe as the first algorithm mentioned above: add noise to the empirical mean based on the empirical covariance matrix.
The catch is that the empirical mean and covariance are replaced with <em>stable</em> estimates of the empirical mean and covariance, where stability bounds how much the estimators can change due to modification of individual datapoints.
Importantly, these stable estimators are efficient to compute.
Further details of these subroutines are beyond the scope of this post, but the final algorithm simply adds noise to the stably-estimated mean based on the stably-estimated covariance.
Different extensions of these results are explored in the two papers, including estimation of covariance, and mean estimation in settings where the distribution may be heavy-tailed or rank-deficient.</p>
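<p>Ignoring the stable-estimation subroutines (which are the technical heart of both papers), the common recipe can be caricatured in a few lines: add Gaussian noise to the mean estimate, shaped by the covariance estimate. In this sketch (Python with numpy assumed) we use the plain empirical statistics and a free noise multiplier <code>c</code>; the actual algorithms substitute stable estimators and calibrate the noise to the privacy parameters.</p>

```python
import numpy as np

def noisy_mahalanobis_mean(X, c, rng):
    """Caricature of the covariance-aware recipe: empirical mean plus
    Gaussian noise shaped like the empirical covariance.

    In the awarded papers, the empirical mean and covariance are replaced
    by *stable* estimates, and c is calibrated to the privacy parameters;
    here c is just a free noise multiplier.
    """
    n, d = X.shape
    mu_hat = X.mean(axis=0)
    sigma_hat = np.cov(X, rowvar=False)   # empirical covariance (d x d)
    L = np.linalg.cholesky(sigma_hat)     # a square root of sigma_hat
    z = rng.standard_normal(d)
    # the noise has covariance (c/n)^2 * sigma_hat, so the *Mahalanobis*
    # error it contributes does not depend on the shape of sigma_hat
    return mu_hat + (c / n) * (L @ z)
```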
<p>Most of the algorithms described above are based on some notion of <em>robustness</em>, thus suggesting connections to the mature literature on robust statistics.
These connections have been explored as far back as 2009, in foundational work by Dwork and Lei <strong>[<a href="https://dl.acm.org/doi/10.1145/1536414.1536466">DL09</a>]</strong>.
Over the last couple of years, there has been a flurry of renewed interest in links between robustness and privacy, including, e.g., <strong>[<a href="https://arxiv.org/abs/1905.13229">BKSW19</a>, <a href="https://arxiv.org/abs/2002.09464">KSU20</a>, <a href="https://arxiv.org/abs/2112.03548">KMV22</a>, <a href="https://arxiv.org/abs/2111.06578">LKO22</a>, <a href="https://arxiv.org/abs/2111.12981">HKM22</a>, <a href="https://arxiv.org/abs/2211.00724">GH22</a>, <a href="https://arxiv.org/abs/2212.05015">HKMN23</a>, <a href="https://arxiv.org/abs/2212.08018">AKTVZ23</a>, <a href="https://arxiv.org/abs/2302.01855">AUZ23</a>]</strong>, beyond those mentioned above.
For example, some works <strong>[<a href="https://arxiv.org/abs/2211.00724">GH22</a>, <a href="https://arxiv.org/abs/2212.05015">HKMN23</a>, <a href="https://arxiv.org/abs/2302.01855">AUZ23</a>]</strong> show that, under certain conditions, a robust estimator implies a private one, and vice versa.
The two awarded papers expand this literature in a somewhat different direction – the type of stability property considered leads to algorithms which qualitatively differ from those considered prior.
It will be interesting to see how private and robust estimation evolve together over the next several years.</p>
<p>Congratulations once more to the authors of both awarded papers on their excellent results!</p>
Gautam Kamath, Mon, 17 Jul 2023 12:00:00 -0400
https://differentialprivacy.org/colt23-bsp/
Call for Papers - TPDP 2023 - Submission deadline July 7<p>The <a href="https://tpdp.journalprivacyconfidentiality.org/2023/">9th Workshop on the Theory and Practice of Differential Privacy (TPDP 2023)</a> will take place in Boston September 27-28, 2023.
This is the first year the workshop is a standalone event, and it is expanding from a one-day event to two days. Note that the <a href="https://opendp.org/event/opendp-community-meeting-2023">OpenDP community meeting</a> takes place the following day (also in Boston).</p>
<p>The workshop is intended to bring together the DP research community to discuss new developments over the past year. The workshop is non-archival, so it does not preclude publishing the work elsewhere.</p>
<p>The submission deadline is July 7. Submissions should be 4 pages (plus references and appendices).</p>
<p>Submission website: <a href="https://hcrp.cs.uchicago.edu">https://hcrp.cs.uchicago.edu</a></p>
Thomas Steinke, Wed, 28 Jun 2023 00:01:00 +0000
https://differentialprivacy.org/tpdp2023/
Open problem - Better privacy guarantees for larger groups<p>Consider a simple query counting the number of people in various mutually exclusive groups.
In the differential privacy literature, it is typical to assume that each of these groups should be subject to the same privacy loss: the noise added to each count has the same magnitude, and everyone gets the same privacy guarantees.
However, in settings where these groups have vastly different population sizes, larger populations may be willing to accept more error in exchange for stronger privacy protections.
In particular, in many use cases, <em>relative</em> error (the noisy count is within 5% of the true value) matters more than absolute error (the noisy count is at a distance of at most 100 of the true value).
This leads to a natural question: can we use this fact to develop a mechanism that improves the privacy guarantees of individuals in larger groups, subject to a constraint on relative error?</p>
<h3 id="problem-definition">Problem definition</h3>
<p>Our goal is to obtain a mechanism which minimizes the overall privacy loss for each group without exceeding a relative error threshold for each group.
To formalize this goal, we first define a notion of per-group privacy we call group-wise zero-concentrated differential privacy as follows.</p>
<p><strong>Definition.</strong> <em>Group-wise zero-concentrated differential privacy.</em>
Assume possible datasets consist of records from domain \(U\), and \(U\) can be partitioned into \(k\) fixed, disjoint groups \(U_1\), …, \(U_k\). Let \(v : \mathcal{D} \rightarrow \mathbb{R}^k\) be a function associating a dataset to a vector of privacy budgets (one per group). We say a mechanism \(\mathcal{M}\) satisfies \(v\)-group-wise zero-concentrated differential privacy (zCDP) if for any two datasets \(D\), \(D’\) differing in the addition or removal of a record in \(U_i\), and for all \(\alpha>1\), we have:
\[
D_\alpha\left(\mathcal{M}(D)||\mathcal{M}(D’)\right) \le \alpha \cdot {v(D)}_i
\]
\[
D_\alpha\left(\mathcal{M}(D’)||\mathcal{M}(D)\right) \le \alpha \cdot {v(D)}_i
\]
where \(D_\alpha\) is the Rényi divergence of order \(\alpha\).</p>
<p>This definition is similar to <em>tailored DP</em>, defined in [<a href="https://eprint.iacr.org/2014/982.pdf">LP15</a>]: each individual gets a different privacy guarantee, depending on which group they belong to;
this guarantee also depends on how many people are in this group.
We use zCDP as our definition of privacy due to its compatibility with the Gaussian mechanism; the same idea could easily be applied to other definitions, such as Rényi DP or pure DP.</p>
<p>From there we can give a more formal definition of the problem as follows. The goal is to minimize the privacy loss for each individual group, while keeping the error under a given threshold.
For larger groups that can accept more noise, this means adding more noise to achieve the smallest possible privacy loss.</p>
<p><strong>Problem.</strong>
Let \(r \in (0,1]\) be an acceptable level of relative error, and \(k\) be the number of fixed, disjoint groups partitioning the domain \(U\).
Given a dataset \(D\), let \(x(D)\) be the vector containing the count of records in each group.
The objective is to find a mechanism \(\mathcal{M}\) which takes in \(r\), \(k\), and \(D\) and outputs \(\hat{x}(D)\) such that \(\mathbb{E}\left[\left|{x(D)}_i-{\hat{x}(D)}_i\right|\right]<r\cdot {x(D)}_i\) for all \(i\), and satisfies \(v\)-group-wise zCDP where \(v(D)_i\) is as small as possible for all \(i\).
<br />
To prevent pathological mechanisms that optimize for specific datasets, we add two constraints to the problem: the privacy guarantee \(v(D)_i\) should only depend on \(x(D)_i\), and should be nonincreasing with \(x(D)_i\).</p>
<p>Since the relative error thresholds are proportional to the population size, each population can tolerate a different amount of noise.
This means that to minimize the privacy loss for each group, the mechanism must add noise of different scales to each group.
Of course, directly using \(x(D)_i\) to determine the scale of the noise for group \(i\) leads to a privacy loss which is data dependent, similarly to e.g. PATE [<a href="https://openreview.net/forum?id=HkwoSDPg">PAEGT17</a>], and as such should be treated as a protected value.</p>
<h3 id="an-example-mechanism">An example mechanism</h3>
<p>An example mechanism that seems like it could address this problem is as follows.
First, perform the original counting query and add Gaussian noise to satisfy \(\rho\)-zCDP.
Then, add additional Gaussian noise to each count, with a variance that depends on the noisy count itself — adding more noise to larger groups.
This mechanism is outlined in Algorithm 1.</p>
<p><strong>Algorithm 1.</strong>
<em>Adding data-dependent noise as a post-processing step.</em>
<br />
Require: A dataset \(D\) where each data point belongs to one of \(k\) groups, a privacy parameter \(\rho\), and a relative error rate \(r\).</p>
<ol>
<li>Let \(\sigma^2 = 1/(2\rho)\)</li>
<li><strong>For</strong> \(i=1\) to \(k\) <strong>do</strong></li>
<li>\(\qquad\) Let \(x_i\) be the number of people in \(D\) in group \(i\)</li>
<li>\(\qquad\) Sample \(X_i \sim \mathcal{N}(x_i, \sigma^2)\)</li>
<li>\(\qquad\) Sample \(Y_i \sim \mathcal{N}_{k}(X_i, (rX_i)^2)\)</li>
<li><strong>end for</strong></li>
<li><strong>return</strong> \(Y_1,\dots,Y_k\)</li>
</ol>
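<p>For concreteness, here is a direct transcription of Algorithm 1 (Python with numpy assumed). We take the absolute value of \(rX_i\) as the standard deviation in line 5, since the noisy count \(X_i\) could be negative:</p>

```python
import numpy as np

def two_stage_noisy_counts(counts, rho, r, rng):
    """Algorithm 1: rho-zCDP Gaussian noise, then data-dependent
    post-processing noise proportional to the noisy count."""
    sigma = np.sqrt(1.0 / (2.0 * rho))        # line 1: sigma^2 = 1/(2 rho)
    out = []
    for x_i in counts:
        X_i = rng.normal(x_i, sigma)          # line 4: Gaussian mechanism
        Y_i = rng.normal(X_i, abs(r * X_i))   # line 5: post-processing noise
        out.append(Y_i)
    return out

rng = np.random.default_rng(0)
print(two_stage_noisy_counts([100, 10_000], rho=0.5, r=0.05, rng=rng))
```

Since the groups are disjoint, adding or removing one record changes a single count by 1, so the first stage is a standard invocation of the Gaussian mechanism.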
<p>Algorithm 1 approximately achieves the goal of relative error \(r\) in each group: the total noise variance for group \(i\) is \(\sigma^2 + (rX_i)^2\), where \(X_i\) is itself an accurate zCDP estimate of the true count \(x_i\).
This mechanism satisfies at least \(\rho\)-zCDP: line 4 is an invocation of the Gaussian mechanism with privacy parameter \(\rho\), and line 5 is a post-processing step and as such preserves the zCDP guarantee.
We would like to show that this algorithm also satisfies a stronger group-wise zCDP guarantee.</p>
<p>This makes intuitive sense: line 5 adds additional Gaussian noise without using the private data directly.
Since the noise scale in line 5 is proportional to the total count in line 4, we expect the privacy guarantee to be significantly stronger for large groups with more noise.
Further, we can verify experimentally that when the data magnitude is large compared to the noise, the output distribution for each group is close to a Gaussian distribution.</p>
<p>The below figure illustrates this finding.
We plot 1,000,000 sample outputs of Algorithm 1 (red) with parameters \(\sigma^2 = 100\) and \(r= 0.3\), and compare it to the best fit Gaussian distribution (black outline) with mean \(10,002.6\) and standard deviation of \(2995.1\).</p>
<p><img src="../images/two-stage-noise-gaussian.png" width="70%" alt="A comparison between sample outputs of Algorithm 1 and the best-fit Gaussian distribution, showing that both match very closely." style="margin:auto;display: block;" /></p>
<p>With parameters such as these, the output of the mechanism looks and behaves like a Gaussian distribution, which should be ideal to characterize the zCDP guarantee.
However, it is difficult to directly quantify this guarantee, due to the changing variance which is also a random variable.
Likewise, if the true count is close to zero, or if the first instance of noise is large compared to the true count, then the resulting distribution takes on a heavy skew and is no longer similar to a single Gaussian distribution.
Such distributions with randomized variances have not, to the best of our knowledge, been considered much in the literature, and we do not know whether the mechanism’s output distribution follows some well-studied distribution.</p>
<p>The randomized variance also makes it difficult to bound the Rényi divergence of the distribution and characterize the zCDP guarantees directly.
Current privacy amplification techniques are insufficient, as those techniques consider adding additional noise where the noise parameters are independent of the data itself.</p>
<p>Perhaps the most promising direction to understand more about such processes is the area of stochastic differential equations, where it is common to study noise with data-dependent variance.
The Bessel process [<a href="http://www.stat.ucla.edu/~ywu/research/documents/StochasticDifferentialEquations.pdf">Øks03</a>] is an example of such a process, where the noise is dependent on the current value.
This process captures the noise added as post-processing (Line 5), but not the initial noise-addition step (Line 4).
Furthermore, to the best of our knowledge, the Bessel process and other value-dependent stochastic differential equations do not have closed-form solutions.</p>
<h3 id="goal">Goal</h3>
<p>We see two possible paths forward to address the original question. One path would be to obtain an analysis of Algorithm 1 which shows non-trivial improved privacy guarantees for larger groups.
We tried multiple approaches, but could not prove such a result.</p>
<p>An alternative path would be to develop a different algorithm, which achieves better privacy guarantees for larger groups while maintaining the error below the relative error threshold for all groups.</p>
David Pujol, Damien Desfontaines, Mon, 26 Jun 2023 21:00:00 -0400
https://differentialprivacy.org/open-problem-better-privacy-guarantees-for-larger-groups/
Composition Basics<p>Our data is subject to many different uses. Many entities will have access to our data and those entities will perform many different analyses that involve our data. The greatest risk to privacy is that an attacker will combine multiple pieces of information from the same or different sources and that the combination of these will reveal sensitive details about us.
Thus we cannot study privacy leakage in a vacuum; it is important that we can reason about the accumulated privacy leakage over multiple independent analyses, which is known as <em>composition</em>. We have <a href="/privacy-composition/">previously discussed</a> why composition is so important for differential privacy.</p>
<p>This is the first in a series of posts on <em>composition</em> in which we will explain in more detail how composition analyses work.</p>
<p>Composition is quantitative. The differential privacy guarantee of the overall system will depend on the number of analyses and the privacy parameters that they each satisfy. The exact relationship between these quantities can be complex. There are various composition theorems that give bounds on the overall parameters in terms of the parameters of the parts of the system.</p>
<p>The simplest composition theorem is what is known as basic composition, which applies to pure \(\varepsilon\)-DP (although it can be extended to approximate \((\varepsilon,\delta)\)-DP):</p>
<blockquote>
<p><strong>Theorem</strong> (Basic Composition)
Let \(M_1, M_2, \cdots, M_k : \mathcal{X}^n \to \mathcal{Y}\) be randomized algorithms. Suppose \(M_j\) is \(\varepsilon_j\)-DP for each \(j \in [k]\).
Define \(M : \mathcal{X}^n \to \mathcal{Y}^k\) by \(M(x)=(M_1(x),M_2(x),\cdots,M_k(x))\), where each algorithm is run independently. Then \(M\) is \(\varepsilon\)-DP for \(\varepsilon = \sum_{j=1}^k \varepsilon_j\).</p>
</blockquote>
<p><em>Proof.</em>
Fix an arbitrary pair of neighbouring datasets \(x,x' \in \mathcal{X}^n\) and an output \(y \in \mathcal{Y}^k\).
To establish that \(M\) is \(\varepsilon\)-DP, we must show that \(e^{-\varepsilon} \le \frac{\mathbb{P}[M(x)=y]}{\mathbb{P}[M(x')=y]} \le e^\varepsilon\). By independence, we have \[\frac{\mathbb{P}[M(x)=y]}{\mathbb{P}[M(x')=y]} = \frac{\prod_{j=1}^k\mathbb{P}[M_j(x)=y_j]}{\prod_{j=1}^k\mathbb{P}[M_j(x')=y_j]} = \prod_{j=1}^k \frac{\mathbb{P}[M_j(x)=y_j]}{\mathbb{P}[M_j(x')=y_j]} \le \prod_{j=1}^k e^{\varepsilon_j} = e^{\sum_{j=1}^k \varepsilon_j} = e^\varepsilon,\] where the inequality follows from the fact that each \(M_j\) is \(\varepsilon_j\)-DP and, hence, \(e^{-\varepsilon_j} \le \frac{\mathbb{P}[M_j(x)=y_j]}{\mathbb{P}[M_j(x')=y_j]} \le e^{\varepsilon_j}\). Similarly, \(\prod_{j=1}^k \frac{\mathbb{P}[M_j(x)=y_j]}{\mathbb{P}[M_j(x')=y_j]} \ge \prod_{j=1}^k e^{-\varepsilon_j} = e^{-\varepsilon}\), which completes the proof. ∎</p>
<p>Basic composition is already a powerful result, despite its simple proof; it establishes the versatility of differential privacy and allows us to begin reasoning about complex systems in terms of their building blocks. For example, suppose we have \(k\) functions \(f_1, \cdots, f_k : \mathcal{X}^n \to \mathbb{R}\) each of sensitivity \(1\). For each \(j \in [k]\), we know that adding \(\mathsf{Laplace}(1/\varepsilon)\) noise to the value of \(f_j(x)\) satisfies \(\varepsilon\)-DP. Thus, if we add independent \(\mathsf{Laplace}(1/\varepsilon)\) noise to each value \(f_j(x)\) for all \(j \in [k]\), then basic composition tells us that releasing this vector of \(k\) noisy values satisfies \(k\varepsilon\)-DP. If we want the overall system to be \(\varepsilon\)-DP, then we should add independent \(\mathsf{Laplace}(k/\varepsilon)\) noise to each value \(f_j(x)\).</p>
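To make the noise calibration above concrete, here is a minimal sketch in Python (using NumPy; the helper name <code>laplace_release</code> is ours, not from the post) of answering \(k\) sensitivity-\(1\) queries under an overall \(\varepsilon\)-DP budget via basic composition:

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_release(values, scale, rng):
    """Add independent Laplace(scale) noise to each query answer."""
    return values + rng.laplace(loc=0.0, scale=scale, size=len(values))

# k sensitivity-1 queries under an overall budget epsilon: each release is
# (epsilon/k)-DP, so basic composition makes the whole vector epsilon-DP.
epsilon, k = 1.0, 10
true_answers = np.arange(k, dtype=float)  # stand-ins for f_1(x), ..., f_k(x)
noisy = laplace_release(true_answers, scale=k / epsilon, rng=rng)

# The standard deviation of Laplace(b) is sqrt(2)*b, i.e. sqrt(2)*k/epsilon here.
print(noisy.shape)  # (10,)
```

Note the per-query scale is \(k/\varepsilon\), matching the discussion above: the noise grows linearly in the number of queries.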
<h2 id="is-basic-composition-optimal">Is Basic Composition Optimal?</h2>
<p>If we want to release \(k\) values each of sensitivity \(1\) (as above) and have the overall release be \(\varepsilon\)-DP, then, using basic composition, we can add \(\mathsf{Laplace}(k/\varepsilon)\) noise to each value. The variance of the noise for each value is \(2k^2/\varepsilon^2\), so the standard deviation is \(\sqrt{2} k /\varepsilon\). In other words, the scale of the noise must grow linearly with the number of values \(k\) if the overall privacy and each value’s sensitivity is fixed. It is natural to wonder whether the scale of the Laplace noise can be reduced by improving the basic composition result. We now show that this is not possible.</p>
<p>For each \(j \in [k]\), let \(M_j : \mathcal{X}^n \to \mathbb{R}\) be the algorithm that releases \(f_j(x)\) with \(\mathsf{Laplace}(k/\varepsilon)\) noise added. Let \(M : \mathcal{X}^n \to \mathbb{R}^k\) be the composition of these \(k\) algorithms. Then \(M_j\) is \(\varepsilon/k\)-DP for each \(j \in [k]\) and basic composition tells us that \(M\) is \(\varepsilon\)-DP. The question is whether \(M\) satisfies a better DP guarantee than this – i.e., does \(M\) satisfy \(\varepsilon_*\)-DP for some \(\varepsilon_*<\varepsilon\)?
Suppose we have neighbouring datasets \(x,x'\in\mathcal{X}^n\) such that \(f_j(x) = f_j(x')+1\) for each \(j \in [k]\). Let \(y=(a,a,\cdots,a) \in \mathbb{R}^k\) for some \(a \ge \max_{j=1}^k f_j(x)\).
Then
\[
\frac{\mathbb{P}[M(x)=y]}{\mathbb{P}[M(x')=y]} = \frac{\prod_{j=1}^k \mathbb{P}[f_j(x)+\mathsf{Laplace}(k/\varepsilon)=y_j]}{\prod_{j=1}^k \mathbb{P}[f_j(x')+\mathsf{Laplace}(k/\varepsilon)=y_j]}
\]
\[
= \prod_{j=1}^k \frac{\frac{\varepsilon}{2k}\exp\left(-\frac{\varepsilon}{k} |y_j-f_j(x)| \right)}{\frac{\varepsilon}{2k}\exp\left(-\frac{\varepsilon}{k} |y_j-f_j(x')| \right)}
= \prod_{j=1}^k \frac{\exp\left(-\frac{\varepsilon}{k} (y_j-f_j(x)) \right)}{\exp\left(-\frac{\varepsilon}{k} (y_j-f_j(x')) \right)}
\]
\[
= \prod_{j=1}^k \exp\left(\frac{\varepsilon}{k}\left(f_j(x)-f_j(x')\right)\right)
= \exp\left( \frac{\varepsilon}{k} \sum_{j=1}^k \left(f_j(x)-f_j(x')\right)\right)= e^\varepsilon,
\]
where the third equality removes the absolute values because \(y_j \ge f_j(x)\) and \(y_j \ge f_j(x')\).
This shows that basic composition is optimal. For this example, we cannot prove a better guarantee than what is given by basic composition.</p>
<p>Is there some other way to improve upon basic composition that circumvents this example? Note that we assumed that there are neighbouring datasets \(x,x'\in\mathcal{X}^n\) such that \(f_j(x) = f_j(x')+1\) for each \(j \in [k]\). In some settings, no such worst-case datasets exist. In that case, instead of scaling the noise linearly with \(k\), we can scale the Laplace noise according to the \(\ell_1\) sensitivity \(\Delta_1 := \sup_{x,x' \in \mathcal{X}^n \atop \text{neighbouring}} \sum_{j=1}^k |f_j(x)-f_j(x')|\).</p>
<p>Instead of adding assumptions to the problem, we will look more closely at the example above.
We showed that there exists some output \(y \in \mathbb{R}^k\) such that \(\frac{\mathbb{P}[M(x)=y]}{\mathbb{P}[M(x')=y]} = e^\varepsilon\).
However, such outputs \(y\) are very rare: we require \(y_j \ge \max\{f_j(x),f_j(x')\}\) for each \(j \in [k]\), where \(y_j = f_j(x) + \mathsf{Laplace}(k/\varepsilon)\). Thus, in order to observe an output \(y\) for which the likelihood ratio is maximal, all \(k\) Laplace noise samples must be positive, which happens with probability \(2^{-k}\).
The fact that outputs \(y\) with maximal likelihood ratio are exceedingly rare turns out to be a general phenomenon and not specific to the example above.</p>
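The \(2^{-k}\) claim is easy to check empirically. A quick Monte Carlo sketch (Python with NumPy; the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

# An output y attains the maximal likelihood ratio exactly when all k
# Laplace noise samples are positive; estimate how often that happens.
k, trials = 10, 200_000
noise = rng.laplace(scale=1.0, size=(trials, k))  # the scale doesn't affect the sign
frac_maximal = np.mean((noise > 0).all(axis=1))

print(frac_maximal)  # close to 2**-10, i.e. about 0.001
```

With \(k=10\), only about one output in a thousand is a worst-case one.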
<p>Can we improve on basic composition if we only ask for a high-probability bound? That is, instead of demanding \(\frac{\mathbb{P}[M(x)=y]}{\mathbb{P}[M(x')=y]} \le e^{\varepsilon_*}\) for all \(y \in \mathcal{Y}\), we demand \(\mathbb{P}_{Y \gets M(x)}\left[\frac{\mathbb{P}[M(x)=Y]}{\mathbb{P}[M(x')=Y]} \le e^{\varepsilon_*}\right] \ge 1-\delta\) for some \(0 < \delta \ll 1\). Can we prove a better bound \(\varepsilon_* < \varepsilon\) in this relaxed setting? The answer turns out to be yes.</p>
<p>The limitation of pure \(\varepsilon\)-DP is that events with tiny probability – which are negligible in real-world applications – can dominate the privacy analysis. This motivates us to move to relaxed notions of differential privacy, such as approximate \((\varepsilon,\delta)\)-DP and concentrated DP, which are less sensitive to low probability events.</p>
<h2 id="preview-advanced-composition">Preview: Advanced Composition</h2>
<p>By moving to approximate \((\varepsilon,\delta)\)-DP with \(\delta>0\), we can prove an asymptotically better composition theorem, which is known as <em>the advanced composition theorem</em> <strong><a href="https://ieeexplore.ieee.org/document/5670947" title="Cynthia Dwork, Guy Rothblum, Salil Vadhan. Boosting and Differential Privacy. FOCS 2010.">[DRV10]</a></strong>.</p>
<blockquote>
<p><strong>Theorem</strong> (Advanced Composition Starting from Pure DP<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup>)
Let \(M_1, M_2, \cdots, M_k : \mathcal{X}^n \to \mathcal{Y}\) be randomized algorithms. Suppose \(M_j\) is \(\varepsilon_j\)-DP for each \(j \in [k]\).
Define \(M : \mathcal{X}^n \to \mathcal{Y}^k\) by \(M(x)=(M_1(x),M_2(x),\cdots,M_k(x))\), where each algorithm is run independently. Then \(M\) is \((\varepsilon,\delta)\)-DP for any \(\delta>0\) with \[\varepsilon = \frac12 \sum_{j=1}^k \varepsilon_j^2 + \sqrt{2\log(1/\delta) \sum_{j=1}^k \varepsilon_j^2}.\]</p>
</blockquote>
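As a sanity check of the two theorems, here is a small sketch (Python; the function names are ours) computing both bounds for \(k\) mechanisms with equal budgets:

```python
import math

def basic_composition_epsilon(eps_list):
    """Overall epsilon from basic composition: just the sum."""
    return sum(eps_list)

def advanced_composition_epsilon(eps_list, delta):
    """Overall epsilon from the advanced composition theorem [DRV10],
    for independent eps_j-DP mechanisms and a chosen delta > 0."""
    s = sum(e * e for e in eps_list)
    return 0.5 * s + math.sqrt(2 * math.log(1 / delta) * s)

k, eps_j, delta = 100, 0.1, 1e-6
print(round(basic_composition_epsilon([eps_j] * k), 3))            # 10.0
print(round(advanced_composition_epsilon([eps_j] * k, delta), 3))  # 5.757
```

Even after paying \(\delta=10^{-6}\), the advanced bound is well below the basic one for these parameters.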
<p>Recall that basic composition gives \(\delta=0\) and \(\varepsilon = \sum_{j=1}^k \varepsilon_j\). That is, basic composition scales with the 1-norm of the vector \((\varepsilon_1, \varepsilon_2, \cdots, \varepsilon_k)\), whereas advanced composition scales with the 2-norm of this vector (and the squared 2-norm).
Neither bound strictly dominates the other. However, asymptotically (in a sense we will make precise in the next paragraph) advanced composition dominates basic composition.</p>
<p>Suppose we have a fixed \((\varepsilon,\delta)\)-DP guarantee for the entire system and we must answer \(k\) queries of sensitivity \(1\).
Using basic composition, we can answer each query by adding \(\mathsf{Laplace}(k/\varepsilon)\) noise to each answer.
However, using advanced composition, we can answer each query by adding \(\mathsf{Laplace}(\sqrt{k/2\rho})\) noise to each answer, where<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup>
\[\rho = \frac{\varepsilon^2}{4\log(1/\delta)+4\varepsilon}.\]
If the privacy parameters \(\varepsilon,\delta>0\) are fixed (which implies \(\rho\) is fixed) and \(k \to \infty\), we can see that asymptotically advanced composition gives noise per query scaling as \(\Theta(\sqrt{k})\), while basic composition results in noise scaling as \(\Theta(k)\).</p>
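The \(\Theta(k)\) versus \(\Theta(\sqrt{k})\) comparison can be made concrete with a short sketch (Python; the function names are ours), using the \(\rho\) defined just above:

```python
import math

def basic_scale(k, eps):
    """Laplace scale per query when splitting eps evenly via basic composition."""
    return k / eps

def advanced_scale(k, eps, delta):
    """Laplace scale per query, sqrt(k/(2*rho)), calibrated via advanced composition."""
    rho = eps**2 / (4 * math.log(1 / delta) + 4 * eps)
    return math.sqrt(k / (2 * rho))

eps, delta = 1.0, 1e-6
for k in (10, 100, 1000):
    print(k, basic_scale(k, eps), round(advanced_scale(k, eps, delta), 1))
# For small k basic composition can win, but as k grows the sqrt(k)
# scaling of the advanced bound takes over.
```

This also illustrates the earlier remark that neither bound strictly dominates: at \(k=10\) the basic-composition noise is smaller, while at \(k=1000\) the advanced-composition noise is far smaller.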
<p>In the next few posts we will explain how advanced composition works. We hope this conveys an intuitive understanding of composition and, in particular, how this \(\sqrt{k}\) asymptotic behaviour arises. If you want to read ahead, these posts are extracts from <a href="https://arxiv.org/abs/2210.00597">this book chapter</a>.</p>
<hr />
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>This result generalizes to approximate DP. If instead we assume \(M_j\) is \((\varepsilon_j,\delta_j)\)-DP for each \(j \in [k]\), then the final composition is \((\varepsilon,\delta+\sum_{j=1}^k \delta_j)\)-DP with \(\varepsilon\) as before. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>Adding \(\mathsf{Laplace}(\sqrt{k/2\rho})\) noise to a sensitivity-1 query ensures \(\varepsilon_j\)-DP for \(\varepsilon_j = \sqrt{2\rho/k}\). Hence \(\sum_{j=1}^k \varepsilon_j^2 = 2\rho\). Setting \(\rho = \frac{\varepsilon^2}{4\log(1/\delta)+4\varepsilon}\) ensures that \(\frac12 \sum_{j=1}^k \varepsilon_j^2 + \sqrt{2\log(1/\delta) \sum_{j=1}^k \varepsilon_j^2} = \rho + \sqrt{4\rho\log(1/\delta)} \le \varepsilon\). <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>
Thomas Steinke. Tue, 01 Nov 2022 11:45:00 -0400
https://differentialprivacy.org/composition-basics/

Privacy Doona: Why We Should Hide Among The Clones<p>In this blog post, we will discuss a recent(ish) result of Feldman, McMillan, and Talwar <a href="https://arxiv.org/abs/2012.12803" title="Vitaly Feldman, Audra McMillan, Kunal Talwar. Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling. FOCS 2021"><strong>[FMT21]</strong></a>, which provides an improved and simple analysis of the so-called “amplification by shuffling”, formally connecting local privacy (LDP) and shuffle privacy.<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup> Now, I’ll assume the reader is familiar with both LDP and Shuffle DP: if not, a quick-and-dirty refresher (with less quick, and less dirty references) can be found <a href="/trustmodels/">here</a>, and of course there is also Albert Cheu’s excellent survey on Shuffle DP <a href="https://arxiv.org/abs/2107.11839" title="Albert Cheu. Differential Privacy in the Shuffle Model: A Survey of Separations. arXiv 2021"><strong>[Cheu21]</strong></a>.</p>
<p>I will also ignore most of the historical details, but it is worth mentioning that <a href="https://arxiv.org/abs/2012.12803" title="Vitaly Feldman, Audra McMillan, Kunal Talwar. Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling. FOCS 2021"><strong>[FMT21]</strong></a> is not the first paper on this “amplification by shuffling,” (which, for local reasons, I’ll just call a <em>privacy <a href="https://www.collinsdictionary.com/dictionary/english/doona">doona</a></em>) but rather is the culmination of a rather long line of work involving many cool ideas and papers, starting with <a href="https://arxiv.org/abs/1808.01394" title="Albert Cheu, Adam D. Smith, Jonathan Ullman, David Zeber, Maxim Zhilyaev. Distributed Differential Privacy via Shuffling. EUROCRYPT 2019"><strong>[CSUZZ19</strong></a>, <a href="https://arxiv.org/abs/1811.12469" title="Úlfar Erlingsson, Vitaly Feldman, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Abhradeep Thakurta. Amplification by Shuffling: From Local to Central Differential Privacy via Anonymity. SODA 2019"><strong>EFMRTT19]</strong></a>: I’d refer the reader to <strong>Table 1</strong> in <a href="https://arxiv.org/abs/2012.12803" title="Vitaly Feldman, Audra McMillan, Kunal Talwar. Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling. FOCS 2021"><strong>[FMT21]</strong></a> for an overview.</p>
<p>Alright, now that the caveats are behind us, what <em>is</em> “amplification by shuffling”? In a nutshell, it captures the (false!) intuition that “anonymization provides privacy” (which, again, is false! Don’t do this!) and makes it… less false. The idea is that while <em>anonymization does not provide in itself any meaningful privacy guarantee</em>, it can <em>amplify an existing, rigorous privacy guarantee</em>. So if I start with a somewhat lousy LDP guarantee, but then all the messages sent by all users are completely anonymized, my lousy LDP guarantee suddenly gets <em>much</em> stronger (roughly speaking, the \(\varepsilon\) parameter goes down with the square root of the number of users involved). Which is wonderful! Let’s see what this means, quantitatively.</p>
<h3 id="the-result-of-feldman-mcmillan-and-talwar">The result of Feldman, McMillan, and Talwar.</h3>
<p>Here, we will focus on the simpler case of <em>noninteractive</em> protocols (one-shot messages from the users to the central server, no funny business with messages going back and forth); which is conceptually simpler to state and parse, still very rich and interesting, and, well, very relevant in practice (being the easiest and cheapest to deploy). If you want the results in their full glorious generality, though, they are in the paper.</p>
<p>What the main theorem of <a href="https://arxiv.org/abs/2012.12803" title="Vitaly Feldman, Audra McMillan, Kunal Talwar. Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling. FOCS 2021"><strong>[FMT21]</strong></a> is saying for this noninteractive setting can then be stated as follows: if I have an <em>\(\varepsilon_L\)-locally private</em> (LDP) protocol for a task, where all \(n\) users pass their data through the same randomizer (algorithm) \(R\) and send the resulting message \(y_i \gets R(x_i)\), then just permuting the messages \(y_1\dots,y_n\) immediately gives an \((\varepsilon,\delta)\)-<em>shuffle</em> private protocol for the same task, for any pair \((\varepsilon,\delta)\) which satisfies
<a name="eq:eps:epsL"></a>
\begin{equation}
\varepsilon \leq \log\left( 1+ 16\frac{e^{\varepsilon_{L}}-1}{e^{\varepsilon_{L}}+1}\sqrt{\frac{e^{\varepsilon_{L}}\log\frac{4}{\delta}}{n}}\right) \tag{1}
\end{equation}
as long as \(n \gg e^{\varepsilon_{L}}\log(1/\delta)\). That is quite a lot to parse, though: what does this actually <em>mean</em>?</p>
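Written out in code, the bound <a href="#eq:eps:epsL">(1)</a> looks like this (a Python sketch; the function name is ours):

```python
import math

def shuffled_epsilon(eps_local, n, delta):
    """Central epsilon from bound (1): n users each run the same eps_local-LDP
    randomizer, then the messages are shuffled. Only meaningful in the regime
    n >> exp(eps_local) * log(1/delta)."""
    a = (math.exp(eps_local) - 1) / (math.exp(eps_local) + 1)
    b = math.sqrt(math.exp(eps_local) * math.log(4 / delta) / n)
    return math.log(1 + 16 * a * b)

# A fairly lousy local guarantee (eps_L = 2) becomes a strong central one:
print(round(shuffled_epsilon(2.0, n=10**6, delta=1e-8), 3))  # 0.138
```

More users means a smaller central \(\varepsilon\), exactly the \(1/\sqrt{n}\) behaviour discussed below.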
<p><strong>First</strong>, the assumption that all users have the same randomizer (or at least cannot be distinguished by their randomizer) is quite natural: if they didn’t, then we wouldn’t be able to say anything in general, since the randomizer they use could just give away their identity completely. For instance, as an extreme case, the randomizer of user \(i\) could just append \(i\) to the message (it’s OK, still LDP!), and then shuffling achieves exactly nothing: we know who sent what. So OK, asking for all randomizers to be the same is not really a restriction.</p>
<p><strong>Second</strong>, each user only sends one message, and this preserves its length (we just shuffled the messages, we didn’t modify them!). So if you start with an LDP protocol with amazing features XYZ (e.g., the messages are \(1\)-bit long, or users don’t share a random seed, or the randomizers run in time \(O(1)\)), then the shuffle protocol enjoys exactly the same properties. (It also naturally enjoys some <em>robustness</em>, in the sense that if \(10\%\) of the \(n\) users maliciously deviate from the protocol, they can’t really jeopardize the privacy of the remaining \(90\%\) of users.<sup id="fnref:2" role="doc-noteref"><a href="#fn:2" class="footnote" rel="footnote">2</a></sup> Which is… good.)</p>
<p><strong>Third</strong>, this is inherently approximate DP. Here we started with pure LDP (you can also extend that to approximate LDP) and ended up with approximate Shuffle DP: this is not a mistake, that’s how it is. I am not a purist (erm) myself, and that looks more than good enough to me; but if you seek pure Shuffle DP, then this result is not the droid you’re looking for.</p>
<p align="center">
<img src="../images/droids-looking.png" width="50%" alt="This is not the pure DP guarantee you are looking for." />
</p>
<p><br /></p>
<p>Alright, <em>what</em> is this guarantee stated in <a href="#eq:eps:epsL">(1)</a> giving us? Let’s interpret the expression in <a href="#eq:eps:epsL">(1)</a> in two parameter regimes, focusing on \(\varepsilon\) (fixing some small \(\delta>0\)). If we start with \(\varepsilon_{L} \ll 1\) for our LDP randomizers \(R\), then a first-order Taylor expansion shows that we get
\[
\varepsilon \approx \varepsilon_{L}\cdot 8\sqrt{\frac{\log\frac{4}{\delta}}{n}}
\]
so that <em>shuffling improved our privacy parameter by a factor \(\sqrt{n}\)</em>.<sup id="fnref:3" role="doc-noteref"><a href="#fn:3" class="footnote" rel="footnote">3</a></sup> 😲 This is great! With more users, comes more privacy!</p>
<p>But that was starting with small \(\varepsilon_{L}\), that is, already pretty good privacy guarantees for our LDP “building block” \(R\). What happens if we start with “somewhat lousy” privacy guarantees, that is, \(\varepsilon_{L} \gg 1\)? Do we get anything interesting then?
Another Taylor expansion (everything is a Taylor expansion) shows us that, then,
<a name="eq:epsL:ll:one"></a>
\[
\begin{equation}
\varepsilon \approx \log\left( 1+ 8\sqrt{\frac{e^{\varepsilon_{L}}\log\frac{4}{\delta}}{n}}\right) \tag{2}
\end{equation}
\]
or, put differently,
<a name="eq:epsL:gg:one"></a>
\[
\begin{equation}
\varepsilon \approx 8e^{\varepsilon_{L}/2}\sqrt{\frac{\log\frac{4}{\delta}}{n}} \tag{3}
\end{equation}
\]
That’s a bit harder to interpret, but that seems… useful? It is: let us see how much, with a couple examples.</p>
<h4 id="learning">Learning.</h4>
<p>The first one is distribution learning, a.k.a. density estimation: you have \(n\) i.i.d. samples (one per user) from an unknown probability distribution \(\mathbf{p}\) over a discrete domain of size \(k\), and your goal is to output an estimate \(\widehat{\mathbf{p}}\) such that, with high (say, constant) probability, \(\mathbf{p}\) and \(\widehat{\mathbf{p}}\) are close in <em>total variation distance</em>:
\[
\operatorname{TV}(\mathbf{p},\widehat{\mathbf{p}}) = \sup_{S\subseteq [k]} (\mathbf{p}(S) - \widehat{\mathbf{p}}(S) ) \leq \alpha
\]
(if total variation distance seems a bit mysterious, it’s exactly half the \(\ell_1\) distance between the probability mass functions). We know how to solve this problem in the non-private setting: \(n=\Theta\left( \frac{k}{\alpha^2} \right)\) samples are necessary and sufficient. We know how to solve this problem in the (central) DP setting: \(n=\Theta\left( \frac{k}{\alpha^2} + \frac{k}{\alpha\varepsilon} \right)\) samples are necessary and sufficient <a href="https://proceedings.neurips.cc/paper/2015/hash/2b3bf3eee2475e03885a110e9acaab61-Abstract.html" title="Ilias Diakonikolas, Moritz Hardt, Ludwig Schmidt. Differentially Private Learning of Structured Discrete Distributions. NeurIPS 2015"><strong>[DHS15]</strong></a>. We know how to solve this problem in the LDP setting:
<a name="eq:learning:ldp"></a>
\begin{equation}
n=\Theta\left(\frac{k^2}{\alpha^2(e^\varepsilon-1)^2}+\frac{k^2}{\alpha^2e^\varepsilon}+\frac{k}{\alpha^2}\right) \tag{4}
\end{equation}
samples are necessary and sufficient <a href="http://proceedings.mlr.press/v89/acharya19a.html" title="Jayadev Acharya, Ziteng Sun, Huanyu Zhang. Hadamard Response: Estimating Distributions Privately, Efficiently, and with Little Communication. AISTATS 2019"><strong>[ASZ19]</strong></a> (note that the first term is just \(k/(\alpha^2\varepsilon^2)\) for small \(\varepsilon\)). Now, as they say in Mulan: <em>let’s make a shuffle DP algo out of you.</em></p>
<p>If we want to achieve \((\varepsilon,\delta)\)-shuffle DP, we need to select \(\varepsilon_L\). Based on <a href="#eq:epsL:ll:one">(2)</a> and <a href="#eq:epsL:gg:one">(3)</a>, and ignoring pesky constants, we will choose it so that
<a name="eq:choice:epsL"></a>
\begin{equation}
\varepsilon_{L} \approx \varepsilon \sqrt{\frac{n}{\log(1/\delta)}} \quad\text{ or }\quad e^{\varepsilon_{L}} \approx \varepsilon^2 \cdot \frac{n}{\log(1/\delta)}\,. \tag{5}
\end{equation}
depending on whether \(\frac{\varepsilon^2 n}{\log(1/\delta)}\geq 1\). Plugging that back into <a href="#eq:learning:ldp">(4)</a>, we see that the first case corresponds to the first term (small \(\varepsilon_{L}\)) and the second to the second term (\(\varepsilon_{L} \geq 1\)); overall, the condition on \(n\) for the original LDP algorithm to
successfully learn the distribution becomes
\[
n \gtrsim
\frac{k^2}{\alpha^2(e^{\varepsilon_{L}}-1)^2}+\frac{k^2}{\alpha^2e^{\varepsilon_{L}}}+\frac{k}{\alpha^2}
\approx \frac{k^2\log(1/\delta)}{\alpha^2\varepsilon^2 n}+\frac{k^2\log(1/\delta)}{\alpha^2\varepsilon^2 n}+\frac{k}{\alpha^2}
\approx \frac{k^2\log(1/\delta)}{\alpha^2\varepsilon^2 n}+\frac{k}{\alpha^2}
\]
(where \(\gtrsim\) means “let’s ignore constants”). There is an \(n\) on the RHS as well, so, reorganizing and handling the two terms separately, the condition on \(n\) becomes
\[
n \gtrsim \frac{k \sqrt{\log(1/\delta)}}{\alpha\varepsilon}+\frac{k}{\alpha^2}
\]
which… is great? We immediately get a sample complexity \(O\left(\frac{k}{\alpha^2}+\frac{k \sqrt{\log(1/\delta)}}{\alpha\varepsilon}\right)\) in the shuffle DP model, which (ignoring the \(\sqrt{\log(1/\delta)}\)) matches the one in the <em>central</em> DP setting!</p>
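One way to double-check the bookkeeping above is to verify the fixed point numerically: plug the claimed sufficient \(n\) back into the self-referential condition. A Python sketch (constants are dropped, exactly as the \(\gtrsim\) notation does, and the function names are ours):

```python
import math

def rhs(n, k, alpha, eps, delta):
    """RHS of the self-referential condition on n (constants dropped):
    k^2 log(1/delta) / (alpha^2 eps^2 n) + k / alpha^2."""
    return k**2 * math.log(1 / delta) / (alpha**2 * eps**2 * n) + k / alpha**2

def n_star(k, alpha, eps, delta):
    """Claimed sufficient number of users:
    k sqrt(log(1/delta)) / (alpha eps) + k / alpha^2."""
    return k * math.sqrt(math.log(1 / delta)) / (alpha * eps) + k / alpha**2

k, alpha, eps, delta = 1000, 0.1, 1.0, 1e-8
n = n_star(k, alpha, eps, delta)
print(n >= rhs(n, k, alpha, eps, delta))  # True: the claimed n satisfies the condition
```

Taking \(n\) an order of magnitude smaller makes the check fail, consistent with the claim that this is (up to constants) the right threshold.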
<p><strong>tl;dr:</strong> Taking an optimal LDP algorithm and just shuffling the messages <em>immediately</em> gives an optimal shuffle DP algorithm, no extra work needed.</p>
<h4 id="uniformity-testing">(Uniformity) Testing.</h4>
<p>Alright, maybe it was a fluke? Let’s look at another “basic” problem close to my heart: we don’t want to learn the probability distribution \(\mathbf{p}\), just test whether it is actually <em>the</em> uniform distribution<sup id="fnref:4" role="doc-noteref"><a href="#fn:4" class="footnote" rel="footnote">4</a></sup> \(\mathbf{u}\) on the domain \([k]=\{1,2,\dots,k\}\). So if \(\mathbf{p} =\mathbf{u}\), you’ve got to say “yes” with probability at least \(2/3\), and if \(\operatorname{TV}(\mathbf{p},\mathbf{u})>\alpha\), then you need to say “no” with probability at least \(2/3\).</p>
<p>This is also well understood in the non-private setting (\(n=\Theta(\sqrt{k}/\alpha^2)\)) <a href="https://ieeexplore.ieee.org/document/4626074" title="Liam Paninski. A Coincidence-Based Test for Uniformity Given Very Sparsely Sampled Discrete Data. IEEE Trans. Inf. Theory 2008"><strong>[Paninski08]</strong></a> [see also <a href="https://ccanonne.github.io/survey-topics-dt.html">my upcoming survey</a>], in the central DP setting (\(n=\Theta\left( \frac{\sqrt{k}}{\alpha^2} + \frac{\sqrt{k}}{\alpha\sqrt{\varepsilon}}+\frac{k^{1/3}}{\alpha^{4/3}\varepsilon^{2/3}} + \frac{1}{\alpha\varepsilon} \right)\)) <a href="https://arxiv.org/abs/1707.05128" title="Jayadev Acharya, Ziteng Sun, Huanyu Zhang. Differentially Private Testing of Identity and Closeness of Discrete Distributions. NeurIPS 2018"><strong>[ASZ18</strong></a>, <a href="https://arxiv.org/abs/1707.05497" title="Maryam Aliakbarpour, Ilias Diakonikolas, Ronitt Rubinfeld. Differentially Private Identity and Equivalence Testing of Discrete Distributions. ICML 2018"><strong>ADR18]</strong></a>, and in the LDP setting, where the result differs depending on whether the users can communicate or share a common random seed
<a name="eq:testing:ldp:publiccoin"></a>
\begin{equation}
n=\Theta\left( \frac{k}{\alpha^2(e^\varepsilon-1)^2} + \frac{k}{\alpha^2e^{\varepsilon/2}} + \frac{\sqrt{k}}{\alpha^2}\right) \tag{6}
\end{equation} or not
<a name="eq:testing:ldp:privatecoin"></a>
\begin{equation}
n=\Theta\left( \frac{k^{3/2}}{\alpha^2(e^\varepsilon-1)^2} + \frac{k^{3/2}}{\alpha^2e^{\varepsilon}} + \frac{\sqrt{k}}{\alpha^2}\right) \tag{7}
\end{equation}
as established in a sequence of papers <a href="https://arxiv.org/abs/1812.11476" title="Inference under Information Constraints: Lower Bounds from Chi-Square Contraction. COLT 2019"><strong>[ACT19</strong></a>, <a href="https://arxiv.org/abs/1911.01452" title="Kareem Amin, Matthew Joseph, Jieming Mao. Pan-Private Uniformity Testing. COLT 2020"><strong>AJM20</strong></a>, <a href="https://arxiv.org/abs/2101.07981" title="Jayadev Acharya, Clément L. Canonne, Cody Freitag, Ziteng Sun, Himanshu Tyagi.
Inference Under Information Constraints III: Local Privacy Constraints. IEEE J. Sel. Areas Inf. Theory 2021"><strong>ACFST21</strong></a>, <a href="https://arxiv.org/abs/2007.10976" title="Jayadev Acharya, Clément L. Canonne, Yuhan Liu, Ziteng Sun, Himanshu Tyagi. Interactive Inference Under Information Constraints. IEEE Trans. Inf. Theory 2022"><strong>ACLST22</strong></a>, <a href="https://arxiv.org/abs/2108.08987" title="Clément L. Canonne, Hongyi Lyu. Uniformity Testing in the Shuffle Model: Simpler, Better, Faster. SOSA 2022"><strong>CL22]</strong></a>.</p>
<p>Now, say you want an \((\varepsilon,\delta)\)-shuffle DP algorithm for uniformity testing, but don’t want to design one from scratch (though it <em>is</em> possible to do so, and some did <a href="https://arxiv.org/abs/2004.09481" title="Victor Balcer, Albert Cheu, Matthew Joseph, Jieming Mao. Connecting Robust Shuffle Privacy and Pan-Privacy. SODA 2021"><strong>[BCJM21</strong></a>, <a href="https://arxiv.org/abs/2108.08987" title="Clément L. Canonne, Hongyi Lyu. Uniformity Testing in the Shuffle Model: Simpler, Better, Faster. SOSA 2022"><strong>CL22</strong></a>, <a href="https://arxiv.org/abs/2112.10032" title="Albert Cheu, Chao Yan. Pure Differential Privacy from Secure Intermediaries. arXiv 2021"><strong>CY21]</strong></a>). Let’s say you want to look at the “no-common-random-seed-shared-by-users” model (a.k.a. <em>private-coin</em> setting): so you stare at the corresponding LDP sample complexity, <a href="#eq:testing:ldp:privatecoin">(7)</a>, and try to choose \(\varepsilon_L\) to start with before shuffling. This will be the same as in the learning example (i.e., <a href="#eq:choice:epsL">(5)</a>): based on <a href="#eq:epsL:ll:one">(2)</a> and <a href="#eq:epsL:gg:one">(3)</a>, we will set
\begin{equation}
\varepsilon_{L} \approx \varepsilon \sqrt{\frac{n}{\log(1/\delta)}} \quad\text{ or }\quad e^{\varepsilon_{L}} \approx \varepsilon^2 \cdot \frac{n}{\log(1/\delta)}\,.
\end{equation}
depending on whether \(\frac{\varepsilon^2 n}{\log(1/\delta)}\geq 1\). Plugging this back in <a href="#eq:testing:ldp:privatecoin">(7)</a> and quickly checking which case corresponds to each term, we then easily get that for our algorithm to correctly solve the uniformity testing problem, it suffices that the sample complexity (number of users) \(n\) satisfies
\[
n \gtrsim \frac{k^{3/2}}{\alpha^2(e^{\varepsilon_L}-1)^2} + \frac{k^{3/2}}{\alpha^2e^{\varepsilon_L}} + \frac{\sqrt{k}}{\alpha^2}
\approx \frac{k^{3/2}\log(1/\delta)}{\alpha^2\varepsilon^2 n } + \frac{\sqrt{k}}{\alpha^2}
\]
which, reorganizing and solving for \(n\), means that it suffices to have
\[
n \gtrsim \frac{k^{3/4}\sqrt{\log(1/\delta)}}{\alpha\varepsilon} + \frac{\sqrt{k}}{\alpha^2}\,.
\]
And, <em>voilà</em>! Even better, we also have strong evidence to suspect that this sample complexity \(O\Big(\frac{k^{3/4}\sqrt{\log(1/\delta)}}{\alpha\varepsilon}+ \frac{\sqrt{k}}{\alpha^2}\Big)\) is tight among all private-coin algorithms.<sup id="fnref:5" role="doc-noteref"><a href="#fn:5" class="footnote" rel="footnote">5</a></sup></p>
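The same numerical fixed-point check works for the private-coin testing bound (again a sketch with constants dropped; the names are ours):

```python
import math

def rhs_testing(n, k, alpha, eps, delta):
    """RHS of the private-coin testing condition (constants dropped):
    k^(3/2) log(1/delta) / (alpha^2 eps^2 n) + sqrt(k) / alpha^2."""
    return k**1.5 * math.log(1 / delta) / (alpha**2 * eps**2 * n) + math.sqrt(k) / alpha**2

def n_testing(k, alpha, eps, delta):
    """Claimed sufficient n: k^(3/4) sqrt(log(1/delta)) / (alpha eps) + sqrt(k) / alpha^2."""
    return k**0.75 * math.sqrt(math.log(1 / delta)) / (alpha * eps) + math.sqrt(k) / alpha**2

k, alpha, eps, delta = 10_000, 0.1, 1.0, 1e-8
n = n_testing(k, alpha, eps, delta)
print(n >= rhs_testing(n, k, alpha, eps, delta))  # True
```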
<p>Now, if you wanted to look at <em>public-coin</em> shuffle DP protocols (with a common random seed available), then you would start with an optimal public-coin LDP algorithm (and look at <a href="#eq:testing:ldp:publiccoin">(6)</a>), and setting \(\varepsilon_L\) the same way you’d get a shuffle DP algorithm with sample complexity
\[
n=O\Big(\frac{k^{2/3}\log^{1/3}(1/\delta)}{\alpha^{4/3}\varepsilon^{2/3}} + \frac{\sqrt{k\log(1/\delta)}}{\alpha\varepsilon}+ \frac{\sqrt{k}}{\alpha^2}\Big)
\]
which, well, is <em>also</em> strongly believed to be optimal!</p>
<p><strong>tl;dr:</strong> Here again, taking an optimal off-the-shelf LDP algorithm and just shuffling the messages <em>immediately</em> gives an optimal shuffle DP algorithm, no extra work needed.</p>
<h3 id="conclusion">Conclusion.</h3>
<p>I hope the above convinced you of how useful this privacy amplification can be: from an optimal LDP algorithm, featuring any extra appealing characteristics you like, <em>just adding an extra shuffling step as postprocessing</em> yields an (often optimal? At least good) shuffle DP algorithm, <em>with the same characteristics</em> and built-in robustness against malicious users.</p>
<p>All you need is to make sure that your starting point, the LDP algorithm, satisfies a couple of things: (1) all users have the same randomizer,<sup id="fnref:6" role="doc-noteref"><a href="#fn:6" class="footnote" rel="footnote">6</a></sup> and (2) it works in all regimes of \(\varepsilon\) (both high-privacy, \(\varepsilon \leq 1\), <em>and</em> low-privacy, \(\varepsilon \gg 1\)). Once you’ve got this, Bob’s your uncle! You get shuffle DP algorithms for free.</p>
<p>It is not only appealing from a theoretical point of view, by the way! The authors of the paper worked hard to make their empirical analysis compelling as well, and their code is available <a href="https://github.com/apple/ml-shuffling-amplification">on GitHub</a> 📝. But more importantly, from a practitioner’s point of view, this means it is enough to design, implement, and test <em>one</em> algorithm (the LDP one we start with) to automatically get a trusted one in the shuffle DP model as well: this reduces the risk of bugs and security failures, and the amount of work spent tuning and testing…</p>
<p>So yes, whenever possible, we <em>should</em> hide among the clones!</p>
<hr />
<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:1" role="doc-endnote">
<p>The title of this post is a reference to the title of <a href="https://arxiv.org/abs/2012.12803" title="Vitaly Feldman, Audra McMillan, Kunal Talwar. Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy Amplification by Shuffling. FOCS 2021"><strong>[FMT21]</strong></a>, “Hiding Among The Clones,” and to the notion of <em>privacy blanket</em> introduced by Balle, Bell, Gascón, and Nissim <a href="https://link.springer.com/chapter/10.1007/978-3-030-26951-7_22" title="Borja Balle, James Bell, Adrià Gascón, Kobbi Nissim. The Privacy Blanket of the Shuffle Model. CRYPTO 2019"><strong>[BBGN19]</strong></a>. Intuitively, the “amplification by shuffling” paradigm can be seen as anonymizing the messages from local randomizers, whose message distribution can be mathematically decomposed as a mixture of “noise distribution not depending on the user’s input” and “distribution actually depending on their input.” As a result, each user randomly sends a message from the first or second distribution of the mixture. But the shuffling then hides the informative messages (drawn from the second part of the mixture) among the non-informative (noise) ones: so the noise messages end up providing a “privacy blanket” in which sensitive information is safely and soundly wrapped. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:2" role="doc-endnote">
<p>More specifically, they can completely jeopardize the <em>utility</em> (accuracy) of the result, but in terms of privacy, all they can do is slightly reduce it: if \(10\%\) of users are malicious, the remaining \(90\%\) still get the privacy amplification of guarantee of <a href="#eq:eps:epsL">(1)</a>, but with \(0.9n\) instead of \(n\). <a href="#fnref:2" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:3" role="doc-endnote">
<p>Of course, we started with a local privacy guarantee, and ended up with a shuffle privacy guarantee: so the two are incomparable, and one has to interpret this “amplification” in that context. <a href="#fnref:3" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:4" role="doc-endnote">
<p>You can here replace uniform by any known distribution \(\mathbf{q}\) of your choosing, that doesn’t change the question (and result), but uniform is nice. <a href="#fnref:4" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:5" role="doc-endnote">
<p>As long as one is happy with approximate DP. One can achieve that in pure DP as well, but it’s a bit more complicated <a href="https://arxiv.org/abs/2112.10032" title="Albert Cheu, Chao Yan. Pure Differential Privacy from Secure Intermediaries. arXiv 2021"><strong>[CY21]</strong></a>. <a href="#fnref:5" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
<li id="fn:6" role="doc-endnote">
<p>This is not such a big assumption usually, and there are somewhat-general ways to get to that using a logarithmic factor in the number of users. <a href="#fnref:6" class="reversefootnote" role="doc-backlink">↩</a></p>
</li>
</ol>
</div>
Clément Canonne. Tue, 24 May 2022 11:45:00 -0400
https://differentialprivacy.org/privacy-doona/