Distinguished Lecture: Catuscia Palamidessi – Director of Research at INRIA

Date and time: 9 February 2024, 10:00-11:00 CET
Speaker: Catuscia Palamidessi, Director of Research at INRIA – Leader of the equipe Comète
Title: Statistical and information-theoretic methods for privacy and fairness

Where: Digital Futures hub, Osquars Backe 5, floor 2 at KTH main campus OR Zoom
Directions: https://www.digitalfutures.kth.se/contact/how-to-get-here/
OR
Zoom: https://kth-se.zoom.us/j/69560887455
Meeting ID: 695 6088 7455

Moderator & Administrator: Tobias Oechtering, oech@kth.se

Abstract: The increasingly pervasive use of big data and machine learning is raising various ethical issues, in particular privacy and fairness. In the area of privacy protection, differential privacy (DP) and its variants are the most successful approaches to date; they are based on adding controlled noise to the data. In particular, the local model of DP is appealing because the noise is added directly by the data owner; hence it does not require any trusted third party.
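
For intuition (this example is mine, not part of the talk), the simplest local DP mechanism is randomized response on a single bit: each data owner perturbs their own answer with a probability calibrated to the privacy parameter epsilon, so no trusted curator is needed. A minimal Python sketch:

    import math
    import random

    def randomized_response(true_bit: int, epsilon: float) -> int:
        # Report the true bit with probability e^eps / (e^eps + 1),
        # otherwise report its complement; this satisfies epsilon-local-DP for one bit.
        p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
        return true_bit if random.random() < p_truth else 1 - true_bit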

One of the fundamental issues of DP is to find mechanisms that provide the desired level of privacy while retaining as much utility as possible. This is particularly challenging in the local model of DP, because we have to consider both the utility from the point of view of the original data owner (quality of service, QoS) and the utility for the data analysts (statistical utility).

In this talk, I will discuss a local DP mechanism based on information theory (rate-distortion theory), namely the Blahut-Arimoto (BA) algorithm, which generates a stochastic channel achieving Pareto optimality between mutual information (MI) and distortion; in our context, distortion corresponds to a basic notion of QoS. The resulting mechanism not only minimizes MI, but also satisfies (metric) DP.
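
As a rough illustration of the rate-distortion machinery mentioned above (a sketch of the classical BA iteration, not the speaker's implementation; the array names and the Lagrange parameter s are my own choices), assuming a known prior p over the original values and a distortion matrix d:

    import numpy as np

    def blahut_arimoto(p, d, s, iters=200):
        # Classical Blahut-Arimoto iteration for rate-distortion:
        #   p[x]    prior over the original values
        #   d[x, y] distortion of reporting y when the true value is x
        #   s > 0   Lagrange parameter trading mutual information against distortion
        # Returns a channel Q[x, y] = P(report y | true value x) lying on the
        # MI/distortion Pareto frontier for this value of s.
        m = d.shape[1]
        q = np.full(m, 1.0 / m)                  # output marginal, start uniform
        for _ in range(iters):
            Q = q[None, :] * np.exp(-s * d)      # Q(y|x) proportional to q(y) exp(-s d(x,y))
            Q /= Q.sum(axis=1, keepdims=True)
            q = p @ Q                            # marginal induced by the prior and the channel
        return Q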

Then, I will discuss a method to enhance statistical utility by de-noising the empirical distribution generated by the mechanism. The “Iterative Bayesian Update” (IBU) method is a special case of the expectation-maximization method from statistics. I will then show a surprising duality between BA (noise addition) and IBU (noise removal), thanks to which the two methods work particularly well together and achieve a good privacy-utility trade-off.
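
A compact sketch of an IBU-style de-noising step (again my own illustration of the general EM-style update, with hypothetical variable names), assuming the channel C used by the mechanism and the empirical distribution q of the reported values:

    import numpy as np

    def iterative_bayesian_update(q, C, iters=500):
        # Iterative Bayesian Update (a special case of expectation-maximization):
        #   q[z]    empirical distribution of the noisy reported values
        #   C[x, z] P(report z | true value x), the mechanism's channel
        # Returns an estimate of the distribution of the original values.
        n = C.shape[0]
        theta = np.full(n, 1.0 / n)              # start from the uniform prior
        for _ in range(iters):
            joint = theta[:, None] * C           # theta(x) * C(x, z)
            posterior = joint / (joint.sum(axis=0, keepdims=True) + 1e-12)
            theta = posterior @ q                # average the posterior over the observations
        return theta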

Finally, I will consider the problem of fairness in machine learning. I will show that IBU can also be applied in this domain to recover latent variables and enhance both fairness and accuracy.

Biography: Catuscia Palamidessi received a PhD degree from the University of Pisa in 1988. She is a Director of Research at INRIA, where she leads the team Comète. She worked as a full professor at the University of Genova, Italy (1994-1997) and at Pennsylvania State University (1998-2002).

Her research interests include concurrency, distributed systems, and security. Her past achievements include the proof of expressiveness gaps between various concurrent calculi and the development of a probabilistic version of the asynchronous π-calculus. Her current research is in mobile calculi, probability, and the use of probabilistic concepts in concurrency and security.

She has been the program committee chair of various conferences, including MFPS 2008, SOFSEM 2008, ICALP 2005, and CONCUR 2000. She is on the editorial board of Mathematical Structures in Computer Science (MSCS), Theory and Practice of Logic Programming (TPLP), and Electronic Notes in Theoretical Computer Science (ENTCS). She is a member of the Executive Committee of the European Association of Theoretical Computer Science (EATCS).

Link to the profile of Catuscia Palamidessi