EXTREMUM is bringing together expertise from different fields of science – machine learning, law and healthcare, says PI Panagiotis Papapetrou
Meet Panagiotis Papapetrou, Professor, Department of Computer and Systems Sciences at Stockholm University and PI of research project Explainable and Ethical Machine Learning for Knowledge Discovery from Medical Data Sources (EXTREMUM) at Digital Futures.
Hi there, Panagiotis Papapetrou, PI of EXTREMUM at Digital Futures. What are the challenges behind this project?
– The key challenge in using machine learning and artificial intelligence in healthcare is that medical practitioners and patients have limited trust in opaque predictive models. This intensifies the need for mechanisms and frameworks for explainable machine learning solutions, so that end-users, practitioners, and patients not only receive highly accurate or statistically significant predictions but also an understandable explanation of the reasoning behind these predictions.
What is the purpose of this project?
– The purpose of EXTREMUM is to provide a set of novel methods and tools that can achieve good trade-offs between predictive performance and explainability in healthcare applications. We are bringing together expertise from different fields of science, including machine learning, law and healthcare.
How is the workgroup organized, and who participates?
– The project is managed by Stockholm University, and the workgroup comprises four core teams. From the Stockholm University side there are two teams. One is the Data science team from DSV, which includes me as main PI, co-PI Lars Asker, postdoc Ioanna Miliou, PhD students Zhendong Wang and Luis Quintero, and research assistant Vasiliki Kougia. Then we have the Law team, from the Department of Law, run by Stanley Greenstein. From KTH there is a Decision and Control team with co-PI Cristian Rojas and a PhD student who will start in September. Finally, there is a Signal processing team from RISE with co-PI Rami Mochaourab and research assistant Sugandh Sinha.
Can you mention some interesting findings or conclusions? Anything that surprised you?
– So far we have developed a set of explainable techniques for counterfactuals, i.e., examples with suggested changes so that the opaque classifier changes its decision. For example, given a patient configuration and the medical history of that patient, what is it that we should change about that patient so that the predicted outcome improves? In addition, we have formulated a new workflow for ranking radiographs based on their severity, producing a set of tags (labels) that describe the medical findings, as well as some diagnostic text explaining these findings. Medical practitioners were impressed by the tagging capabilities of our system and by some of the explanatory captions. Nonetheless, there is still room for improvement and further evaluation.
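To illustrate the idea of a counterfactual explanation in general terms (this is not the EXTREMUM method itself): given an opaque classifier and a patient it flags as high risk, a counterfactual search looks for the smallest change to the patient's features that flips the prediction. The sketch below uses an entirely hypothetical rule-based classifier and made-up feature names and thresholds.

```python
def risk_model(patient):
    """Stand-in for an opaque classifier: 1 = high risk, 0 = low risk.
    The rule and thresholds here are hypothetical."""
    return int(patient["systolic_bp"] > 140 or patient["glucose"] > 126)

def counterfactual(patient, mutable, step=1.0, max_steps=100):
    """Naive one-feature-at-a-time search: lower each mutable feature
    step by step until the model's decision flips, and report the
    change, i.e. 'what should change so the predicted outcome improves'."""
    original = risk_model(patient)
    for feature in mutable:
        candidate = dict(patient)
        for _ in range(max_steps):
            candidate[feature] -= step
            if risk_model(candidate) != original:
                return {feature: candidate[feature]}
    return None  # no counterfactual found within the search budget

patient = {"systolic_bp": 150.0, "glucose": 120.0, "age": 63}
print(counterfactual(patient, mutable=["systolic_bp", "glucose"]))
# e.g. suggests lowering systolic_bp until the risk prediction flips
```

Real counterfactual methods additionally constrain the changes to be plausible and actionable (one cannot lower a patient's age), which is part of what makes the healthcare setting challenging.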
What is the next step? What would you like to see happen now?
– Our immediate goal is to release our first online demonstrator, where the public can become familiar with our explainable machine learning solutions on publicly available datasets. Moreover, we intend to closely involve medical practitioners in providing us with a more extensive qualitative assessment of the produced explanations.
Read more and watch the video of Explainable and Ethical Machine Learning for Knowledge Discovery from Medical Data Sources (EXTREMUM)