About the project
Objective
Study the privacy of outliers from an information-theoretic point of view, propose a privacy notion adapted to the problem, such as pointwise maximal leakage, and design sanitising mechanisms in the light of these principled insights.
Background
The most popular privacy measure, differential privacy, can protect outliers only at the cost of destroying accuracy, while its relaxation, metric differential privacy, fails to guarantee the privacy of such isolated points.
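The tension above can be illustrated with the standard Laplace mechanism for releasing a mean (a generic textbook sketch, not a mechanism proposed by this project): noise is calibrated to the global sensitivity of the query, so bounds wide enough to cover an outlier inflate the noise scale, while tight bounds keep accuracy but simply clip the outlier away. All names and numbers below are illustrative assumptions.

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sample from the Laplace(0, scale) distribution
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def dp_mean(data, lo, hi, epsilon):
    # Classic Laplace mechanism: clamp each record to [lo, hi], then add
    # noise calibrated to the mean's global sensitivity (hi - lo) / n.
    clipped = [min(max(x, lo), hi) for x in data]
    true_mean = sum(clipped) / len(clipped)
    sensitivity = (hi - lo) / len(clipped)
    return true_mean + laplace_noise(sensitivity / epsilon)

# 99 "typical" records near 0.5 plus one outlier at 1000.0 (toy data)
data = [0.5] * 99 + [1000.0]

# Bounds wide enough to hide the outlier multiply the noise scale by ~1000,
# so the released mean is nearly useless; tight bounds restore accuracy but
# clip the outlier out of the statistic entirely.
wide = dp_mean(data, 0.0, 1000.0, epsilon=1.0)   # private for the outlier, inaccurate
tight = dp_mean(data, 0.0, 1.0, epsilon=1.0)     # accurate, but the outlier is erased
```

Either way the outlier loses: it is drowned in noise or discarded, which is the gap that notions such as pointwise maximal leakage aim to address.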
About the Digital Futures Postdoc Fellow
Arnaud Grivet Sébert completed his PhD at CEA List, Gif-sur-Yvette, France, under the direction of Renaud Sirdey and the co-supervision of Cédric Gouy-Pailler. He proposed approaches that combine differential privacy and homomorphic encryption to protect the privacy of training data in distributed machine learning.
He then worked on the privacy of textual data, and especially of outliers, as a post-doctoral researcher at LIX (Laboratoire d’Informatique de l’Ecole Polytechnique), Palaiseau, France, with Catuscia Palamidessi and Sonia Vanier, and at Macquarie University, Sydney, Australia, with Annabelle McIver and Mark Dras.
He is now starting a post-doctoral contract at KTH, funded by Digital Futures and supervised by Tobias Oechtering and Martina Scolamiero. He is especially interested in the theoretical aspects of privacy, but also in its relations with other ethical properties such as frugality, robustness, and fairness.
Main supervisor
Tobias Oechtering, KTH
Co-supervisor
Martina Scolamiero, KTH

