About the project

Objective
This project aims to design and develop shape-changing textile devices that generate rich and dynamic body-centered interactions for children. Physical interactions play a vital role in children’s well-being and contribute significantly to their mental, physical and emotional development. Yet, current digital technologies often prioritise visual and auditory senses over tactile or movement-based modalities. As children’s lives become increasingly mediated by technology, we wish to explore and encourage the design of technologies that enable children (and adults too) to express themselves fully and engage in meaningful physical interactions within tech-mediated environments. We will do this by conducting co-design workshops with children and engaging in soma design methodologies that foreground the body and lived experience in the design process.

Background
This research intersects the emerging fields of soft robotics, e-textiles, soma design and machine learning. It will draw from fabrication techniques in soft robotics and e-textiles to produce shape-changing textile artefacts that offer versatile on- and off-body interactions. Drawing from the field of machine learning and recent work in Human-Robot Interaction, we will explore ways to interpret sensor data from the devices, enabling the generation of responsive, context-aware behaviours of the interfaces. On the design side, we take a soma design approach to develop technologies that promote genuine connection, using strategies such as mediated touch and shared embodied experiences.

About the Digital Futures Postdoc Fellow
Alice Haynes studied Engineering Mathematics at the University of Bristol, UK, going on to specialize in soft robotics – the development of robotic devices made of soft materials – and haptic interfaces – the development of devices that can produce tactile feedback. Alongside her research, she worked at a local arts charity called KWMC, co-facilitating workshops and teaching fabrication skills in its community makerspace. After defending her PhD thesis in 2022, she moved to Germany for a postdoctoral position in the Human-Computer Interaction Lab at Saarland University. There she explored techniques for fabricating shape-changing textiles that could move and adapt to the body and environment. Increasingly interested in the role of our body and felt experience in interactions with such soft, tactile interfaces, she is excited to bring a soma design approach to this project.

Main supervisor
Kristina Höök, Professor, Division of Media Technology and Interaction Design, KTH

Co-supervisor
Iolanda Leite, Associate Professor, Division of Robotics, Perception and Learning, KTH

About the project

Objective
The aim of this project is to analyse the environmental impacts of increased digitalization and the use of Information and Communication Technologies (ICT). The project can include both method development and case studies. The impacts will be analysed using life cycle assessment and life cycle thinking. Case studies can vary in scale, from specific devices and applications to sectoral assessments. Initially, the focus will be on climate impacts and energy use, but it may also be broadened to a larger spectrum of environmental impacts. Assessments will include the direct impacts of ICT as well as different types of indirect impacts, including rebound effects.

Background
The ICT sector has an environmental footprint. The future development of this footprint is debated, and it is important that the discussions have a scientific basis. Digitalisation may be a tool for reducing environmental impacts. By improving efficiencies and dematerialising products and services, new ICT applications can reduce the footprints of other sectors. More studies are, however, needed in order to understand when this actually leads to decreased impacts and when there is a risk of indirect rebound effects that increase use and footprints. Environmental life cycle assessment is a standardised method for assessing potential environmental impacts of products, services and functions “from the cradle to the grave”, i.e. from the extraction of raw materials via production and use to waste management. It is used for analysing the environmental footprints, i.e. the direct impacts, of ICT. It can also be used for analysing different types of indirect effects.
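
As a toy illustration of a rebound effect, with made-up numbers rather than results from the project, consider an ICT service that lowers the energy needed per unit of a function but also makes the function cheaper and therefore more heavily used:

```python
# Hypothetical rebound-effect arithmetic (illustrative numbers only).
energy_per_unit_before = 1.0   # e.g. kWh per delivered unit of service
energy_per_unit_after = 0.6    # a 40% efficiency gain from digitalisation
units_before = 100
rebound = 1.5                  # usage grows 50% as the service gets cheaper

units_after = units_before * rebound
total_before = energy_per_unit_before * units_before   # 100 kWh
total_after = energy_per_unit_after * units_after      # 90 kWh
print(total_before, total_after)
```

Here a 40% efficiency gain shrinks to a 10% net saving once increased use is accounted for; had usage grown by more than a factor of about 1.67, the total impact would even have increased.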

Partner Postdocs
After working in industry on large-scale refrigeration and heat pump systems and as an entrepreneur with solar pumps, Shoaib Azizi undertook a master’s program in Sustainable Energy Engineering at KTH. He moved to Umeå in northern Sweden for a multi-disciplinary PhD project on energy-efficient renovation of buildings. His PhD included research on the opportunities for digital tools to improve management and energy efficiency in buildings. He defended his thesis “A multi-method Assessment to Support Energy Efficiency Decisions in Existing Residential and Academic Buildings” in September 2021. Now Shoaib is a Digital Futures Postdoc researcher in digitalization and climate impacts at the Department of Sustainable Development, Environmental Science and Engineering (SEED) at KTH. His research applies life cycle assessment methodology to understand various aspects of digitalization and its impacts on the environment.

Anna Furberg defended her PhD thesis in 2020 at Chalmers University of Technology. Her thesis, titled “Environmental, Resource and Health Assessments of Hard Materials and Material Substitution: The Cases of Cemented Carbide and Polycrystalline Diamond”, involved Life Cycle Assessment (LCA) case studies and method development. After her thesis, she worked at the Norwegian Institute for Sustainability Research, NORSUS, on various LCA projects and, in several cases, as the project leader. In 2022, she was awarded the SETAC Europe Young Scientist Life Cycle Assessment Award, which recognizes exceptional achievements by a young scientist in the field of LCA. Anna has a Digital Futures Postdoc position in digitalization and climate impacts at the Department of Sustainable Development, Environmental Science and Engineering (SEED) at KTH.

Supervisor
Göran Finnveden is a Professor of Environmental Strategic Analysis at the Department of Sustainable Development, Environmental Sciences and Engineering at KTH. He is also the director of the Mistra Sustainable Consumption research program. His research focuses on sustainable consumption, life cycle assessment and other sustainability assessment tools. The research includes method development and case studies in different areas, including the environmental impacts of ICT.

About the project

Objective
The purpose of this project is to develop better methods for reconstructing time-resolved medical images with multiple image channels. Traditional methods often reconstruct each measurement subset (each image channel or time frame) separately and then combine the results, but this approach does not always lead to the best images. The challenge is to integrate the extra information from the start, during the image reconstruction process itself, in a way that enhances the final result. This is where artificial intelligence (AI) and deep learning come in.

Deep learning models have shown great promise in tackling complex tasks by learning patterns from large amounts of data. However, in medical imaging, data is often scarce, and the computational challenges are significant. A fully data-driven approach is unlikely to succeed. Instead, our project will explore ways to build AI models that incorporate the known relationships between measurement subsets directly into their design. This will allow us to develop efficient and lightweight models that improve image quality while remaining practical to use in real-world clinical settings.

By applying these new methods to spectral CT and PET, we aim to produce medical images with greater diagnostic power, helping doctors detect and treat diseases more effectively. Additionally, our approach will be designed in a flexible, “plug-and-play” manner, so that it can be adapted for other types of imaging in the future. With this research, we hope to take an important step toward more accurate, reliable, and informative medical imaging for patients and healthcare providers alike.
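
One established pattern that matches this modular ambition is the “plug-and-play” reconstruction loop, in which a data-fidelity step built from a known forward operator alternates with a swappable denoising module. The sketch below is a minimal, hypothetical version: both the forward operator A and the denoiser are placeholder Gaussian filters standing in for the real physics model and a learned network, and it is not the project’s actual method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Placeholder forward operator: a blur standing in for the true physics
# (in CT/PET this would model how photons produce the measurements).
def A(x):
    return gaussian_filter(x, sigma=2.0)

A_T = A  # a symmetric blur kernel is self-adjoint

def denoiser(x):
    # Stand-in for a learned denoising network; in a plug-and-play scheme
    # this module can be swapped out without touching the loop below.
    return gaussian_filter(x, sigma=0.8)

def pnp_reconstruct(y, n_iters=50, step=1.0):
    x = np.zeros_like(y)
    for _ in range(n_iters):
        grad = A_T(A(x) - y)           # data-fidelity gradient
        x = denoiser(x - step * grad)  # gradient step, then "prior" step
    return x

# Tiny demo: blur-and-noise a synthetic image, then reconstruct it.
truth = np.zeros((64, 64)); truth[20:44, 20:44] = 1.0
y = A(truth) + 0.01 * np.random.default_rng(0).normal(size=truth.shape)
print(np.abs(pnp_reconstruct(y) - truth).mean())
```

Because the loop never looks inside the denoiser, the same scheme can in principle be reused across modalities by exchanging the operator and the denoising module.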

Background
Medical imaging techniques like Computed Tomography (CT) and Positron Emission Tomography (PET) allow doctors to see inside the human body without surgery. However, these images are not captured directly like a photograph. Instead, they are computed from indirect measurements of light particles (photons) passing through the body.

Imagine looking at the shadows cast by an object in different directions and trying to piece together what the object looks like in three dimensions. This is similar to how medical images are reconstructed from projection data. Some examinations involve acquiring multiple image channels, for example spectral CT, where X-ray images are acquired at multiple energy levels, or combinations of modalities such as PET and CT. It is also possible to acquire a video sequence of multiple images in rapid succession. Combining these different kinds of information has the potential to improve image quality, making diagnoses more accurate, but how this should be done effectively is far from a simple question.
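
The shadow analogy can be made concrete in a few lines. The sketch below uses scikit-image’s Radon transform to simulate single-channel projections of a standard phantom and filtered back-projection to recover the image; it is purely illustrative, and clinical multi-channel reconstruction is far more involved:

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

# A standard test image playing the role of a cross-section of the body.
image = rescale(shepp_logan_phantom(), scale=0.5)

# Simulate "shadows" (projections) from many directions: the sinogram.
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta)

# Reconstruct the image from the projections (filtered back-projection).
reconstruction = iradon(sinogram, theta=theta, filter_name="ramp")

print("mean reconstruction error:", np.abs(reconstruction - image).mean())
```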

Cross-disciplinary collaboration
Developing novel medical imaging methodology is a highly cross-disciplinary activity that requires expertise from physics, mathematics, computer science, engineering, and medical science. In this project, a collaboration between the Department of Physics (SCI), the Department of Biomedical Engineering and Health Systems (CBH), and the Department of Mathematics (SCI) at KTH, we bring together expertise in mathematics and in two different imaging modalities, CT and PET, to develop common methodology that can be applied to multiple medical imaging modalities.

About the project

Objective

The overall goal of the project is to develop optimization frameworks that assist in process design, learn accurate models from process data, and support optimal decision-making. The optimization algorithms and frameworks will be tailored to the needs and processes of interest for LKAB. The optimization problems considered in the project fall into the category of “gray-box” optimization, where parts of the optimization problem are known analytically while other parts are given by a simulator. The simulator is not necessarily a complete black box, but the simulation model can be too complex to be integrated directly into the optimization model.
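
A common gray-box strategy is surrogate-based optimization: sample the expensive simulator at a few points, fit a cheap surrogate model, and optimize the known analytic cost plus the surrogate. The sketch below is a minimal, hypothetical version with invented functions; practical frameworks refine the surrogate iteratively, e.g. within trust regions:

```python
import numpy as np
from scipy.optimize import minimize

def expensive_simulator(x):
    # Stand-in for a CFD/PDE simulation returning, e.g., energy use.
    return np.sin(3 * x[0]) + (x[1] - 0.5) ** 2

# Sample the simulator at a handful of design points.
rng = np.random.default_rng(0)
samples = rng.uniform(0, 1, size=(30, 2))
values = np.array([expensive_simulator(s) for s in samples])

# Fit a cheap quadratic surrogate by least squares.
def features(x):
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2],
                    axis=-1)

coef, *_ = np.linalg.lstsq(features(samples), values, rcond=None)

def surrogate(x):
    return features(np.asarray(x)) @ coef

def analytic_cost(x):
    # The part of the objective that is known in closed form.
    return 0.1 * np.sum(np.asarray(x) ** 2)

# Optimize the known cost plus the surrogate instead of the simulator.
res = minimize(lambda x: analytic_cost(x) + surrogate(x),
               x0=np.array([0.5, 0.5]), bounds=[(0, 1), (0, 1)])
print(res.x, analytic_cost(res.x) + expensive_simulator(res.x))
```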

Designing industrial processes is a challenging task that often requires advanced design and simulation software. For example, accurately describing the process and the physical phenomena involved often requires solving systems of partial differential equations (PDEs) or running a Computational Fluid Dynamics (CFD) simulation, possibly with access to chemical and thermodynamic libraries. Evaluating the impact of changing even a single design parameter can therefore require a full CFD run or the solution of a large PDE system. Solving such systems is a challenge in its own right and, in practice, requires advanced simulation software. The main downside of such software is its inherent black-box nature: one can evaluate a single specific design choice at a time, but the end-user gains no broader insight into the design space.

In practice, when such software is used for process design, engineers often follow common rules of thumb and trial and error to arrive at a good design that can be evaluated by the software and later put into production. The resulting design might, however, be far from globally optimal; a far superior design may exist. Employing a sub-optimal solution can, for example, result in increased environmental impact through higher energy consumption, raw material usage and waste. Furthermore, when used in investment planning and feasibility studies, sub-optimal designs can make superior technologies or solutions seem unreasonably expensive, perhaps even economically infeasible. There is therefore a strong need for frameworks that combine simulation software with optimization algorithms to find optimal process designs.

Background

LKAB is an international mining and minerals group that supplies sustainable iron ore, minerals and specialty products. Since 1890, the company has evolved through unique innovations and technology solutions and is driven forward by more than 4,500 employees in 12 countries. The company is the largest supplier of iron ore in the European Union and a key player in the transformation of the iron and steel industry towards sustainability. LKAB’s goal is to develop carbon-free processes and products by 2045.

The work to eliminate carbon emissions creates new challenges and opportunities. Potential routes and new ideas are continuously being investigated and evaluated. As the production processes are complex and experiments often expensive and time-consuming, the feasibility and potential of new alternatives are often investigated numerically through computer simulations. Currently, these simulations are used in a black-box fashion: a specific configuration of process parameters can be evaluated, but no further information or knowledge about the best configurations is gained. It is therefore important to utilize the full potential of the models and to search systematically for the best solutions. This project aims to provide a key component for a systematic model-based design approach.

Increasing computational power, growing amounts of data and overall digitalization are continuously boosting the potential benefits of extensive computational simulation in process design, but how best to utilize simulators is not trivial. To learn and extract knowledge from the process simulations, the project aims to develop deterministic optimization methods and frameworks for determining optimal designs and operations.

Cross-disciplinary collaboration

The project is a collaboration between LKAB and KTH, where we are focusing on developing optimization algorithms suitable for tackling challenging optimization tasks in industrial production processes. The project brings together expert knowledge from Process Systems Engineering, Applied Mathematics, Optimization, and Process Simulation.


About the project

Objective
This project aims to develop and evaluate algorithms for dynamic inference offloading in Scania’s fleet, optimizing the trade-off between model accuracy, latency, and resource consumption. In the proposed cascaded inference approach, data is first processed by small models at the edge (M1). Depending on inference confidence, decisions are made in real time on whether to offload tasks to more complex models (M2-M4) with higher accuracy but increased computational cost.
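
A minimal sketch of such a confidence-gated cascade is shown below. The model names M1-M4 follow the text, but the confidences, costs and thresholds are invented placeholders rather than Scania’s configuration:

```python
import numpy as np

# Hypothetical per-model thresholds and costs (illustrative only).
THRESHOLDS = {"M1": 0.90, "M2": 0.85, "M3": 0.80}  # M4 always answers
COSTS = {"M1": 1.0, "M2": 5.0, "M3": 20.0, "M4": 100.0}
MODELS = ["M1", "M2", "M3", "M4"]

def run_model(name, x, rng):
    # Stand-in for real inference: returns a label and a confidence.
    return "some_label", rng.uniform(0.5, 1.0)

def cascaded_inference(x, rng):
    total_cost = 0.0
    for name in MODELS:
        label, conf = run_model(name, x, rng)
        total_cost += COSTS[name]
        # Stop if this model is confident enough (or is the last one);
        # otherwise offload the task to the next, more complex model.
        if name == "M4" or conf >= THRESHOLDS[name]:
            return label, conf, total_cost

print(cascaded_inference(None, np.random.default_rng(0)))
```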

The core objectives include: (1) designing algorithms that improve inference accuracy while minimizing latency and bandwidth usage; (2) providing theoretical guarantees for the proposed methods, particularly in terms of regret minimization; and (3) benchmarking these algorithms using real-world data from Scania’s operational vehicles. The research will contribute to optimizing deep learning model deployment, ensuring scalable and efficient AI integration in industrial applications while maintaining cost-effectiveness and system reliability.
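
To make objective (2) concrete, the toy sketch below tunes the M1 offloading threshold online with an epsilon-greedy bandit and tracks empirical regret against the best fixed threshold in hindsight. The reward model is invented for illustration; the project’s actual algorithms and guarantees may look quite different:

```python
import numpy as np

rng = np.random.default_rng(1)
thresholds = np.linspace(0.6, 0.95, 8)  # candidate M1 confidence thresholds

def mean_reward(th):
    # Invented trade-off: accuracy benefit of a high threshold minus the
    # bandwidth/compute cost of offloading more often.
    return 0.7 * th - 0.2 * (1 - th)

counts = np.zeros(len(thresholds))
estimates = np.zeros(len(thresholds))
total_reward = 0.0
T = 2000
for t in range(T):
    if rng.random() < 0.1:                 # explore a random threshold
        a = rng.integers(len(thresholds))
    else:                                  # exploit the best estimate
        a = int(np.argmax(estimates))
    r = mean_reward(thresholds[a]) + rng.normal(scale=0.05)
    counts[a] += 1
    estimates[a] += (r - estimates[a]) / counts[a]
    total_reward += r

# mean_reward is increasing here, so the last threshold is best in
# hindsight; regret is the shortfall relative to always playing it.
regret = T * mean_reward(thresholds[-1]) - total_reward
print(f"cumulative regret ~ {regret:.1f}")
```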

Background
Deep Learning (DL) models have become a standard for data-driven tasks such as classification and predictive analytics due to their high accuracy. However, their computational and memory demands often require cloud-based deployment, which introduces challenges like latency, bandwidth consumption, and security concerns. In response, edge computing has gained traction, enabling inference on resource-constrained Edge Devices (EDs) such as IoT sensors, mobile devices, and autonomous vehicles. While edge deployment reduces communication delays and enhances data privacy, small models often suffer from lower accuracy.

Scania, a global manufacturer of commercial vehicles, faces similar challenges in deploying DL models across its fleet for tasks such as autonomous driving, predictive maintenance, and sustainable operations. There exists a trade-off between accuracy and efficiency when placing models at different computation points. This project explores cascaded inference, where small models operate locally, and only complex cases are offloaded to more powerful computing resources, balancing accuracy with cost efficiency.

Partner Postdocs
This project brings together experts from multiple disciplines to address the challenges of deploying DL models efficiently across Scania’s fleet. Researchers from machine learning, optimization, embedded systems, and automotive engineering will collaborate to develop cascaded inference strategies that optimize accuracy, latency, and resource usage.

Scania’s senior data scientists, Dr. Sophia Zhang Pettersson and Dr. Kuo-Yun Liang, provide real-world insights into vehicle data, predictive maintenance, and cost modelling. Associate Prof. Lei Feng adds knowledge in Bayesian optimization and deep learning techniques for edge computing.

This collaboration ensures that theoretical advancements in machine learning align with practical deployment challenges in commercial vehicles. By integrating perspectives from academia and industry, the project fosters innovation in scalable AI solutions, leading to efficient, adaptive, and cost-effective DL deployment across connected fleets.

Supervisor
KTH researchers, led by Prof. James Gross, contribute expertise in hierarchical inference, algorithm development, and performance guarantees.

About the project

Objective
This project will explore the neural correlates of human-human and human-robot conversations, with the goal of creating adaptive social robots capable of fostering more meaningful interactions. Social robots can assist people in settings such as health care, elderly care, education, public spaces and homes.

Our newly developed telepresence system for human-robot interaction allowed participants to situate themselves in natural conversations while physically inside a functional magnetic resonance imaging (fMRI) scanner. Each participant interacted directly with a human-like robot or a human actor while lying in the scanner. In our previous Research Pairs project, we used this telepresence interface to create the pioneering NeuroEngage fMRI dataset.

This project aims to advance the understanding of conversational engagement by integrating neuroscience, human-robot interaction (HRI), and artificial intelligence. Engagement plays a crucial role in effective communication, yet its underlying brain mechanisms and real-time detection remain largely unexplored. We will use the NeuroEngage dataset and complement it with additional multimodal features like facial expressions, audio embeddings, and detailed annotations of engagement levels. By using multimodal machine learning (MML), this research will develop models capable of detecting and responding to engagement levels in social interactions.
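
As a minimal illustration of the multimodal pipeline, the sketch below trains an early-fusion classifier that simply concatenates per-modality features. The feature dimensions and data are random placeholders, not the NeuroEngage dataset:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical feature blocks per conversational segment.
rng = np.random.default_rng(0)
n = 200
face = rng.normal(size=(n, 32))    # facial-expression features
audio = rng.normal(size=(n, 64))   # audio embeddings
fmri = rng.normal(size=(n, 128))   # fMRI-derived features
y = rng.integers(0, 2, size=n)     # annotated engagement (low/high)

# Early fusion: concatenate modalities and train one classifier.
X = np.concatenate([face, audio, fmri], axis=1)
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())
```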

Background
In everyday conversations, a speaker and a listener are involved in a common project that relies on close coordination, requiring each participant’s continuous attention and related engagement. However, current engagement detection methods lack robustness and often rely on superficial behavioral cues, without considering the underlying neural mechanisms that drive engagement. Prior research has demonstrated the feasibility of engagement detection using multimodal signals, but most existing datasets are limited in their scope and do not incorporate neuroimaging data.

In our previous work, by analyzing two different datasets, we have shown that listening to a robot recruits more activity in sensory regions, including auditory and visual areas. We have also observed strong indications that speaking to a human, compared to the robot, recruits more activity in frontal regions associated with socio-pragmatic processing, i.e. considering the other’s viewpoint and factoring in what to say next. Additional comparisons of this sort will be enabled by expanding our dataset and refining machine learning models for engagement prediction. As a result, this project will support AI-driven conversational adaptivity, advancing research in both HRI and neuroscience.

Cross-disciplinary collaboration
The researchers in the team represent the Division of Speech, Music and Interaction at the Department of Intelligent Systems, KTH EECS, and the Psychology and Linguistics Departments at Stockholm University. This project integrates neuroscience, linguistics, social robotics, and AI to study how humans engage in conversations with both humans and robots.

About the project

Objective
Medical doctors often face difficulty choosing a set of medicines from the many options available for a patient. Medication is expected to be disease-specific as well as person-specific: individual patients may respond differently to the same medication, so the selection of medication should be personalized to each individual’s needs and medical history. In this project, we will explore how artificial intelligence (AI) can help doctors identify existing medications and/or therapies that can be repurposed for the treatment of dementia.

Dementia is a large-scale healthcare problem: around 10% of the population over 65 years of age suffers from it. If AI can assist clinicians in medication selection for dementia patients, it would therefore significantly improve the efficiency of treatment. AI can also predict the decline (or worsening) of a patient’s health condition over time, giving clinicians and healthcare systems precious time to decide on life-saving interventions. This heralds the use of AI to support medication decisions. The pressing questions are: can we trust AI systems, and in particular their core machine learning (ML) algorithms, to analyze patient data and deliver predictions to doctors? Can the ML algorithms explain their predictions to the doctors? In a joint collaboration between Karolinska University Hospital (KUH), Karolinska Institute (KI) and KTH, we will develop trustworthy ML algorithms with explainable results, and then apply these algorithms to discover new uses for approved medications that were originally developed for different medical conditions.

Background
Explainable machine learning (XML) based medication repurposing for dementia (XMLD) refers to the development and application of XML algorithms to identify, among existing drugs and medications, those that can be repurposed for the treatment or management of dementia. The goal is to develop XML algorithms that discover new uses, in patients with dementia, for approved drugs or therapies originally developed for different medical conditions. For a patient and/or a class of patients, identifying potential drugs among the many existing drugs is therefore a variable selection problem, where XML can help.

The XML algorithms will be developed to analyze and identify patterns, relationships and potential associations between drug characteristics, disease severity and patient outcomes. Medication repurposing for dementia using XML has many advantages, such as cost and time savings, known safety profiles, a broad range of medication candidates, and improved treatment efficiency. Overall, it addresses a pressing healthcare problem with potentially widespread impact. While our focus in this project is dementia, the accumulated technological knowledge can be used for medication repurposing for many other health problems and diseases in clinics. The proposed XMLD project will establish a strong cooperation between medical doctors and ML researchers in the clinical environment.
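
As a minimal, hypothetical sketch of the variable-selection framing (synthetic data, not patient records), an L1-penalized model over a patient-by-drug exposure matrix drives most coefficients to zero, leaving a small, inspectable set of candidate drugs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical patient-by-drug exposure matrix and outcomes; real inputs
# would come from clinical records.
rng = np.random.default_rng(0)
n_patients, n_drugs = 500, 200
X = rng.integers(0, 2, size=(n_patients, n_drugs)).astype(float)
y = rng.integers(0, 2, size=n_patients)   # e.g., cognitive improvement

# The L1 penalty drives most coefficients to zero: a simple, inspectable
# form of variable selection over the candidate drugs.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X, y)
selected = np.flatnonzero(clf.coef_[0])
print(f"{len(selected)} candidate drugs selected:", selected[:10])
```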

Partner Postdoc
Xinqi Bao

Main supervisor
Saikat Chatterjee

Co-supervisor(s)
Martina Scolamiero