Activities & Results

Find out what’s going on!

Activities, awards, and other outputs

  • XXXVI Nordic Conference on Law and Information Technology, ‘Securitization, Risk, Rule of Law – and, oh yes, a Pandemic!’, 8-10 November, 2021, Oslo. Presentation of the EXTREMUM project.
  • Quality of Reasoning in Automated Judicial Decisions Conference, Örebro University, 14-15 February, 2022, Örebro. Presentation of the EXTREMUM project in relation to the conference theme.
  • Business in Democracy Initiative (BiDEM) day-long conference, Copenhagen Business School, 6 April, 2022, Copenhagen. Presentation of the EXTREMUM project in relation to the conference theme.
  • ECML/PKDD 2022: Presentation of one main conference paper (C1) and our demo paper (D1).
  • MedAI PhD Forum 2022 (two-day workshop, 6-7 September): Presentation of EXTREMUM to the Forum partners and discussion of potential synergies with Brunel University, the University of Porto, and the University of Magdeburg.
  • Joint workshop between Stockholm University, KTH, and the University of Manchester on explainable machine learning for healthcare, 27-28 September, 2021.


The main goal of this project is to develop and establish a novel data representation, integration, and knowledge discovery framework for medical data sources, focusing on explainable machine learning governed by legal and ethical principles. Particular attention is given to data sources available in electronic health records (EHRs), targeting two healthcare problems: (a) early prediction and treatment of cardiovascular diseases and (b) adverse drug event identification and prevention.

Project objectives, along with the current achievements and results:

Objective 1: Unified representation and integration of complex data spaces. We have explored and defined novel space representations, similarity measures, and methods for searching and indexing large and complex data spaces built from complex data sources. The basic challenge is the temporal nature of these data spaces and the inherent temporal dependencies that may exist within the same data source and across different ones. Our main solution is the adoption and employment of temporal abstractions for temporal event sequences (both time series and discrete event sequences) of univariate and multivariate events.
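To make the idea of temporal abstraction concrete, the sketch below converts a univariate series of timestamped measurements into a sequence of state intervals. It is a minimal, hypothetical illustration (the band thresholds and function names are ours, not the project's actual method):

```python
def temporal_abstraction(values, timestamps, low, high):
    """Abstract a univariate series into (state, start, end) intervals.

    Consecutive samples falling in the same value band ('low', 'normal',
    'high') are merged into a single interval.
    """
    def band(v):
        if v < low:
            return "low"
        if v > high:
            return "high"
        return "normal"

    intervals = []
    cur_state, start = band(values[0]), timestamps[0]
    for v, t in zip(values[1:], timestamps[1:]):
        s = band(v)
        if s != cur_state:
            # Close the current interval and open a new one
            intervals.append((cur_state, start, t))
            cur_state, start = s, t
    intervals.append((cur_state, start, timestamps[-1]))
    return intervals

# Example: a heart-rate-like series with bands below 60 and above 100
hr = [55, 58, 72, 80, 110, 115, 90]
ts = [0, 1, 2, 3, 4, 5, 6]
print(temporal_abstraction(hr, ts, low=60, high=100))
# [('low', 0, 2), ('normal', 2, 4), ('high', 4, 6), ('normal', 6, 6)]
```

The resulting intervals can then serve as discrete events for downstream sequence mining, which is what makes the representation uniform across time series and event sequences.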

Objective 2: Explainable predictive models for complex data sources. Regarding the second objective, our intention is to develop novel predictive modelling algorithms that can support and exploit data sources of complex nature and heterogeneity while at the same time providing explainability of their predictions in the form of interpretable features or rule sets. Towards this end, we have developed new methods for explainable machine learning with an emphasis on example-based explanations, such as counterfactuals for both time series and event sequences. Moreover, we have explored model-based explanations, more concretely local explanations (such as LIME and SHAP), their applicability to the medical domain, and to what extent they are trusted and can be adopted by medical practitioners. In the case of time series data, interpretability can be achieved by focusing on white-box models, such as those described by differential equations, for which we have developed and analyzed several estimation algorithms.
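As a rough illustration of example-based explanation, the sketch below greedily swaps segments of a time series with the corresponding segments of a target-class prototype until a black-box classifier flips its prediction. It is entirely hypothetical and much simpler than the project's published counterfactual methods:

```python
import numpy as np

def segment_counterfactual(x, prototype, predict, target, n_segments=4):
    """Greedy counterfactual sketch: copy segments from a target-class
    prototype into x, one at a time, until predict() returns the target
    class. Returns the counterfactual and the number of swapped segments."""
    cf = x.copy()
    bounds = np.linspace(0, len(x), n_segments + 1, dtype=int)
    for k, (a, b) in enumerate(zip(bounds[:-1], bounds[1:]), start=1):
        cf[a:b] = prototype[a:b]
        if predict(cf) == target:
            return cf, k
    return cf, n_segments

# Toy black box: class 1 iff the series mean exceeds 0.5 (assumption)
predict = lambda s: int(s.mean() > 0.5)
x = np.zeros(8)        # classified as 0
proto = np.ones(8)     # a class-1 prototype
cf, swapped = segment_counterfactual(x, proto, predict, target=1)
```

The swapped segments indicate which parts of the series the classifier relies on, which is the kind of instance-level insight counterfactual explanations aim to give a practitioner.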

We have also developed novel methods to reverse engineer predictors/filters for Hidden Markov Models and Linear Dynamical Systems, rendering them explainable by “opening the box”; furthermore, by modifying the samples arriving at a learning algorithm, we are developing methods that can improve learning while preserving privacy, enforcing fairness, and arriving at more interpretable models. Our future plan is to develop similar methods for counterfactual generation for forecasting (e.g., predicting critical events in the ICU and preventing them by suggesting critical actions). Furthermore, we intend to conduct a more extensive qualitative study on the generated explanations and counterfactuals involving a larger group of medical practitioners. This will also involve a qualitative assessment and deployment of our current demonstrator tool (published recently) in the hospital setting on actual use cases.
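To make “opening the box” concrete, the sketch below writes out the explicit forward-filter recursion of a Hidden Markov Model, i.e. the transparent computation that a reverse-engineered predictor corresponds to. The two-state parameters are toy values, not project results:

```python
import numpy as np

def forward_filter(A, B, pi, obs):
    """Explicit HMM forward filter: returns the filtering distribution
    P(state_t | obs_1..t) at every step, normalised to sum to one."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    history = [alpha]
    for o in obs[1:]:
        # Predict with the transition matrix, then correct with emissions
        alpha = (A.T @ alpha) * B[:, o]
        alpha /= alpha.sum()
        history.append(alpha)
    return np.array(history)

# Toy 2-state chain (hypothetical numbers)
A = np.array([[0.9, 0.1], [0.2, 0.8]])   # row-stochastic transitions
B = np.array([[0.8, 0.2], [0.3, 0.7]])   # emission probabilities
pi = np.array([0.5, 0.5])                # initial distribution
filt = forward_filter(A, B, pi, obs=[0, 0, 1])
```

Because every intermediate belief is available, one can inspect exactly how each observation shifted the state estimate, rather than treating the predictor as a black box.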

Objective 3: Adherence to legal and ethical frameworks. We have focused on the legal and ethical implications associated with the development and use of predictive modelling in relation to healthcare data analysis. To this end, the General Data Protection Regulation (GDPR) is of utmost relevance. We have identified the predominant social values promoted in the GDPR and transposed these legal rules into mathematical equivalents. A notable finding from this process is that the identified social values are mutually exclusive: promoting one of them occurs at the expense of the others, necessitating a balancing act. The social values studied thus far are privacy, accuracy, and explainability. The results of this examination are relevant from a legislative techniques perspective, and these interdisciplinary findings have been published in a legal publication venue.
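The tension between such values can be exhibited with a generic, textbook example (our illustration, not the project's actual formalisation of the GDPR values): the Laplace mechanism of differential privacy, where stronger privacy (smaller epsilon) provably costs accuracy.

```python
import random
import statistics

def private_mean(values, epsilon, lo=0.0, hi=1.0):
    """Laplace mechanism: release the mean of values bounded in [lo, hi]
    with epsilon-differential privacy. Noise grows as epsilon shrinks."""
    sensitivity = (hi - lo) / len(values)  # effect of changing one record
    # Difference of two Exp(1) draws is a standard Laplace(0, 1) sample
    noise = random.expovariate(1.0) - random.expovariate(1.0)
    return sum(values) / len(values) + noise * sensitivity / epsilon

# Average absolute error over many releases, for three privacy levels
random.seed(0)
data = [0.4] * 100
error = {eps: statistics.mean(abs(private_mean(data, eps) - 0.4)
                              for _ in range(2000))
         for eps in (0.1, 1.0, 10.0)}
```

Running this shows the error shrinking roughly tenfold with each tenfold relaxation of privacy: a mathematical form of the balancing act that the legal analysis identifies.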