\end{enumerate}

Karakasidis et al. tackle the special case where the data to be integrated are shared among several parties and privacy-preservation issues arise. \textit{Privacy Preserving Record Linkage} is the problem of integrating data from two or more heterogeneous data sources in such a way that, after the integration process, the only extra knowledge each source gains concerns the records that are common to the participating sources. Related to the above is \textit{differential privacy}, a methodology that lets us reason concretely about privacy-budgeted data analysis (for nice examples justifying this need, refer to \cite{social-genome-2014-chang-kum}). An algorithm satisfies differential privacy if, for any two datasets $D_1$ and $D_2$ that differ in a single row (they are \textit{close}), the ratio of the probabilities that the algorithm produces the same output on $D_1$ and on $D_2$ is bounded by $e^{\epsilon}$.
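The bound just described can be stated formally as follows; this is the standard formulation of $\epsilon$-differential privacy, with $M$ denoting the randomized algorithm and $S$ a set of possible outputs (both symbols are our notation, not from the text above):

\begin{equation*}
\Pr[M(D_1) \in S] \;\le\; e^{\epsilon} \cdot \Pr[M(D_2) \in S]
\qquad \text{for all } S \subseteq \mathrm{Range}(M),
\end{equation*}

for every pair of close datasets $D_1, D_2$. Smaller values of $\epsilon$ (the privacy budget) mean the two distributions are harder to tell apart, and hence that the presence or absence of any single row leaks less information.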