About the project
Objective
This project aims to design and develop shape-changing textile devices that generate rich and dynamic body-centered interactions for children. Physical interactions play a vital role in children’s well-being and contribute significantly to their mental, physical and emotional development. Yet, current digital technologies often prioritise visual and auditory senses over tactile or movement-based modalities. As children’s lives become increasingly mediated by technology, we wish to explore and encourage the design of technologies that enable children (and adults too) to express themselves fully and engage in meaningful physical interactions within tech-mediated environments. We will do this by conducting co-design workshops with children and engaging in soma design methodologies that foreground the body and lived experience in the design process.
Background
This research intersects the emerging fields of soft robotics, e-textiles, soma design and machine learning. It will draw from fabrication techniques in soft robotics and e-textiles to produce shape-changing textile artefacts that offer versatile on- and off-body interactions. Drawing from the field of machine learning and recent work in Human-Robot Interaction, we will explore ways to interpret sensor data from the devices, enabling the generation of responsive, context-aware behaviours of the interfaces. On the design side, we take a soma design approach to develop technologies that promote genuine connection, using strategies such as mediated touch and shared embodied experiences.
About the Digital Futures Postdoc Fellow
Alice Haynes studied Engineering Mathematics at the University of Bristol, UK, going on to specialize in soft robotics – the development of robotic devices made of soft materials – and haptic interfaces – the development of devices that can produce tactile feedback. Alongside her research, she worked in a local arts charity called KWMC, co-facilitating workshops and teaching fabrication skills in their community makerspace. After defending her PhD thesis in 2022, she moved to Germany for a postdoctoral position in the Human-Computer Interaction Lab at Saarland University. There she explored techniques for fabricating shape-changing textiles that could move and adapt to the body and environment. Increasingly interested in the role of our body and felt experience in interactions with such soft, tactile interfaces, she is excited to bring a soma design approach to this project.
Main supervisor
Kristina Höök, Professor, Division of Media Technology and Interaction Design, KTH
Co-supervisor
Iolanda Leite, Associate Professor, Division of Robotics, Perception and Learning, KTH
About the project
Objective
The purpose of this project is to develop better methods for reconstructing time-resolved medical images with multiple image channels. Traditional methods often process these measurement subsets separately and then combine the results, but this approach doesn’t always lead to the best images. The challenge is to integrate the extra information from the start, during the image reconstruction process itself, in a way that enhances the final result. This is where artificial intelligence (AI) and deep learning come in.
Deep learning models have shown great promise in tackling complex tasks by learning patterns from large amounts of data. However, in medical imaging, data is often scarce, and the computational challenges are significant. A fully data-driven approach is unlikely to succeed. Instead, our project will explore ways to build AI models that incorporate the known relationships between measurement subsets directly into their design. This will allow us to develop efficient and lightweight models that improve image quality while remaining practical to use in real-world clinical settings.
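The idea of building the known physics into the model's design can be illustrated with a minimal sketch. The toy forward model, sizes, and the scalar "learned correction" below are illustrative assumptions, not the project's actual CT/PET physics or network architecture; a real unrolled network would replace the scalar weight with a small trained neural module.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model y = A x, a stand-in for a CT/PET projector.
# Sizes and values are illustrative assumptions.
A = rng.normal(size=(30, 20))
x_true = rng.normal(size=20)
y = A @ x_true

def unrolled_reconstruction(y, A, weights=None, n_iters=1000):
    """Model-based iteration: each 'layer' takes a gradient step on the
    data-fidelity term ||Ax - y||^2 (the known physics), followed by a
    learned correction -- here a per-layer scalar weight standing in
    for a small trained network."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    if weights is None:
        weights = np.ones(n_iters)          # "untrained" network
    x = np.zeros(A.shape[1])
    for w in weights[:n_iters]:
        x = x - step * (A.T @ (A @ x - y))  # physics-informed step
        x = w * x                           # learned correction (placeholder)
    return x

x_hat = unrolled_reconstruction(y, A)
print(np.round(np.linalg.norm(x_hat - x_true), 3))
```

Because the measurement operator is built in, only the small correction has to be learned from data, which is what keeps such models lightweight and data-efficient.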
By applying these new methods to spectral CT and PET, we aim to produce medical images with greater diagnostic power, helping doctors detect and treat diseases more effectively. Additionally, our approach will be designed in a flexible, “plug-and-play” manner, so that it can be adapted for other types of imaging in the future. With this research, we hope to take an important step toward more accurate, reliable, and informative medical imaging for patients and healthcare providers alike.
Background
Medical imaging techniques like Computed Tomography (CT) and Positron Emission Tomography (PET) allow doctors to see inside the human body without surgery. However, these images are not captured directly like a photograph. Instead, they are computed from indirect measurements of light particles (photons) passing through the body.
Imagine looking at the shadows cast by an object in different directions and trying to piece together what the object looks like in three dimensions. This is similar to how medical images are reconstructed from projection data. Some examinations build on acquiring multiple image channels, such as in spectral CT where X-ray images are acquired at multiple energy levels, or combining different modalities such as PET and CT. It is also possible to acquire a video sequence of multiple images in rapid succession. Combining these different kinds of information has the potential to improve image quality, making diagnoses more accurate, but how this should be done in an effective way is far from a simple question.
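The "shadows" analogy can be made concrete with a small numerical toy. This is a sketch under simplifying assumptions (a 4x4 image, only four projection directions); real CT uses many more angles and far more sophisticated algorithms.

```python
import numpy as np

# Toy "patient": a 4x4 image with a bright square in the middle.
img = np.zeros((4, 4))
img[1:3, 1:3] = 1.0

# Each measurement sums the pixels along one line -- one "shadow" ray.
# Horizontal, vertical, diagonal and anti-diagonal lines act as a crude
# stand-in for CT projection angles.
lines = []
for i in range(4):
    row = np.zeros((4, 4))
    row[i, :] = 1.0
    lines.append(row.ravel())
    col = np.zeros((4, 4))
    col[:, i] = 1.0
    lines.append(col.ravel())
for k in range(-3, 4):
    lines.append(np.eye(4, k=k).ravel())             # diagonals
    lines.append(np.fliplr(np.eye(4, k=k)).ravel())  # anti-diagonals

A = np.array(lines)           # forward model: image -> projections
sinogram = A @ img.ravel()    # the measured "shadows"

# Reconstruct by solving the linear inverse problem in a least-squares
# sense. With only four projection directions the solution is not
# unique, which hints at why effective reconstruction is far from
# a simple question.
recon, *_ = np.linalg.lstsq(A, sinogram, rcond=None)
recon = recon.reshape(4, 4)
print(np.round(recon, 2))
```

The least-squares solver returns an image consistent with all the measured shadows, but with so few directions several different images fit the data equally well; adding channels, angles, or prior knowledge is what pins the answer down.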
Cross-disciplinary collaboration
Developing novel medical imaging methodology is a highly cross-disciplinary activity which requires involvement of physics, mathematics, computer science, engineering, and medical science. In this project, which is a collaboration between the department of physics (SCI), the department of biomedical engineering and health systems (CBH), and the department of mathematics (SCI) at KTH, we bring together expertise in mathematics and in two different imaging modalities, CT and PET, to develop common methodology that can be applied to multiple medical imaging modalities.
About the project
Objective
The Digital Futures Drone Gymnasium explores the potential of physical and embodied training accessories to support drone programming and their interactions with humans. The project sits at the intersection of mobile robotics, autonomous systems, machine learning, and human-computer interaction, providing tools to study and envision novel relationships between humans and robots.
Training accessories are tools that allow us to better understand how drones can be effectively operated in work and living spaces. Our training accessories physicalise the control mappings of our machines, distributing the cognitive load of controlling a drone across the whole body.
Background
The project follows in the footsteps of the earlier DF Demonstrator Project Drone Arena and largely involves the same research team. The PIs are at the forefront of their respective research fields and provide a unique and complementary combination of expertise. The multiple awards and recognitions obtained by Prof. Luca Mottola in the field of aerial drones provide a stepping stone for technical work. Prof. Kristina Höök pioneered a design philosophy named Soma Design of relevance to designing interactions with autonomous or semi-autonomous systems, such as drones.
The drone manufacturer Bitcraze, based in Malmö, supports the project and provides a much-needed industry perspective. Dr. Joseph La Delfa, who was previously part of this research team and is now an industrial postdoctoral researcher at Bitcraze, will act as a liaison between the Drone Gymnasium and the company. The project is also supported by Rachael Garrett, a PhD candidate at KTH whose research explores ethics in the design of autonomous systems. She also acts as an international collaborator with the Turing AI World-Leading Fellowship Somabotics: Creatively Embodying AI.
Crossdisciplinary collaboration
Prof. Mottola is an expert in mobile robotics and autonomous systems. He focuses on the concrete realisation of the training accessories across hardware and software. Prof. Kristina Höök is a professor in interaction design, specialising in designing movement-based interactions between users and autonomous or semi-autonomous systems.
The expertise of the two PIs comes together in the organisation of the workshops and interactive exhibitions. Successfully accomplishing the project goals, especially distilling insights from the workshops that might transfer to other application domains, will require blending Höök's design skillset with Mottola's systems expertise.
About the project
Objective
With the OrganoFeed project, we aim to leverage our joint expertise in microfluidic engineering and integration, and in predictive algorithm development, to help address a core problem in biomedical research: reproducibility. Specifically, we aim to greatly reduce the variability of organoid cultures, which otherwise hold great promise for improving both fundamental research and drug development, by shifting the paradigm from a homogeneous chemical environment to individualized, data-driven feedback control.
Background
Organoids are miniaturized, self-assembled, and self-organized cellular constructs. They can recapitulate key morphology, cellular composition, and biological function of human organs, improving greatly upon the simplistic mono-cellular models in use for early drug development. At the same time, organoids’ human origin avoids the species mismatch inherent to animal testing, which currently contributes significantly to poor translatability from drug candidates to human clinical trials (not to mention inherent ethical concerns). Last but not least, being derived from individual human donors’ cell samples, organoids can be used to model both fully personalized response as well as true population-level sampling. Organoids are, however, sensitive to even small variations in their culture conditions over the often weeks-long course of their maturation, resulting in high variability of morphology, cell composition, and function.
Current mitigation approaches have focused on providing more homogeneous conditions. We propose instead an entirely different approach, based on feedback-driven control of the chemical environment at the level of each individual organoid. This ability to generate highly homogeneous organoid populations should further increase organoids' attractiveness in replacing both overly simplistic cell models and ethically and functionally suspect animal models with something more meaningful.
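The feedback idea can be sketched in a few lines. All dynamics, rates, and the controller gain below are invented for illustration; they are not the project's actual culture model or control law.

```python
import numpy as np

def simulate(setpoint=5.0, k_p=0.8, hours=48, closed_loop=True, seed=1):
    """Hourly loop for one organoid: it consumes nutrient at a noisy,
    organoid-specific rate; a proportional controller doses fresh medium
    based on the sensed deviation from the setpoint. All rates and the
    gain k_p are illustrative assumptions, not measured values."""
    rng = np.random.default_rng(seed)
    conc = setpoint
    history = []
    for _ in range(hours):
        conc -= rng.uniform(0.2, 0.6)      # organoid-specific uptake
        if closed_loop:
            error = setpoint - conc        # sensed deviation
            conc += max(0.0, k_p * error)  # proportional feedback dose
        history.append(conc)
    return np.array(history)

controlled = simulate(closed_loop=True)
uncontrolled = simulate(closed_loop=False)
# Without feedback the toy concentration drifts far below the setpoint
# (even going negative here, signalling complete depletion); with
# per-organoid feedback it stays near the target despite noisy uptake.
print(controlled[-1], uncontrolled[-1])
```

Because the dose is computed from each organoid's own sensed state, two organoids with different uptake rates still end up in near-identical chemical environments, which is the mechanism behind the reduced variability.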
Crossdisciplinary collaboration
In this project, we are establishing a new cross-disciplinary and complementary collaboration:
- Ioanna Miliou, Assoc. Prof., is an expert in data analysis and modeling, with proficiency in advanced statistical methods, machine learning, and predictive modeling, enabling the extraction of meaningful insights from complex datasets.
- Thomas E Winkler, Assoc. Prof., is an expert in microsystems integration for biomedical applications, such as organs-on-chips, with a focus on electrochemical sensors, microfluidic materials, and human-relevant cells or samples.
- Karolinska Institutet Stem Cell Organoid (KISCO) facility will lend its expertise regarding a range of cutting-edge organoid models and culture methods.
About the project
Objective
This project will explore the neural correlates of human-human and human-robot conversations, with the goal of creating adaptive social robots capable of fostering more meaningful interactions. Social robots can assist people in societal settings such as health care, elderly care, education, public spaces and homes.
Our newly developed telepresence system for human-robot interaction allowed participants to situate themselves in natural conversations while physically inside a functional magnetic resonance imaging (fMRI) scanner. Each participant interacted directly with a human-like robot or a human actor while lying in the scanner. In our previous Research Pairs project, we used this telepresence interface to create the pioneering NeuroEngage fMRI dataset.
This project aims to advance the understanding of conversational engagement by integrating neuroscience, human-robot interaction (HRI), and artificial intelligence. Engagement plays a crucial role in effective communication, yet its underlying brain mechanisms and real-time detection remain largely unexplored. We will use the NeuroEngage dataset and complement it with additional multimodal features like facial expressions, audio embeddings, and detailed annotations of engagement levels. By using multimodal machine learning (MML), this research will develop models capable of detecting and responding to engagement levels in social interactions.
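The multimodal fusion step can be illustrated with a minimal sketch. The feature dimensions, the synthetic labels, and the plain logistic-regression model below are all illustrative assumptions, not the project's actual pipeline: facial features and audio embeddings are concatenated (early fusion) and a classifier is trained to predict engagement.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the modalities mentioned in the text
# (dimensions and the label rule are assumptions for illustration).
n = 400
face = rng.normal(size=(n, 8))    # facial-expression features
audio = rng.normal(size=(n, 16))  # audio embeddings
# Toy ground truth: engagement depends on both modalities.
logits = face[:, 0] + audio[:, 0] - audio[:, 1]
y = (logits > 0).astype(float)

X = np.hstack([face, audio])      # early fusion: concatenate modalities

def train_logreg(X, y, lr=0.1, epochs=500):
    """Plain logistic regression trained by gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

w, b = train_logreg(X, y)
acc = np.mean(((X @ w + b) > 0) == y)
print(f"training accuracy: {acc:.2f}")
```

Because the label depends on both modalities, a model trained on either stream alone would do worse than the fused one; that gap is the basic argument for multimodal engagement detection.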
Background
In everyday conversations, a speaker and a listener are involved in a common project that relies on close coordination, requiring each participant’s continuous attention and related engagement. However, current engagement detection methods lack robustness and often rely on superficial behavioral cues, without considering the underlying neural mechanisms that drive engagement. Prior research has demonstrated the feasibility of engagement detection using multimodal signals, but most existing datasets are limited in their scope and do not incorporate neuroimaging data.
In our previous work, by analyzing two different datasets, we have shown that listening to a robot recruits more activity in sensory regions, including auditory and visual areas. We have also observed strong indications that speaking to a human, compared to the robot, recruits more activity in frontal regions associated with socio-pragmatic processing, i.e., considering the other's viewpoint and factoring in what to say next. Additional comparisons of this sort will be enabled by expanding our dataset and refining machine learning models for engagement prediction. As a result, this project will support AI-driven conversational adaptivity, advancing research in both HRI and neuroscience.
Crossdisciplinary collaboration
The researchers in the team represent the Department of Intelligent Systems, division of Speech Music and Interaction at KTH EECS, the Psychology Department and the Linguistics Department at Stockholm University. This project integrates neuroscience, linguistics, social robotics, and AI to study how humans engage in conversations with both humans and robots.
About the project
Objective
Medical doctors often face difficulty choosing a set of medicines from the many options available for a patient. Medication is expected to be disease-specific as well as person-specific: individual patients may respond differently to the same medication, so the selection of medication should be personalized to each individual's needs and medical history. In this project, we will explore how artificial intelligence (AI) can help doctors identify existing medications and/or therapies that can be repurposed for the treatment of dementia.
Dementia is a large-scale health care problem: around 10% of the population over 65 years of age suffers from it. Therefore, if AI can assist clinicians in medication selection for dementia patients, it would lead to a significant improvement in the efficiency of treatment. AI can also predict the decline (or worsening) of a patient's health condition over time, giving clinicians and healthcare systems precious time to decide on life-saving interventions. This heralds the use of AI in medication decisions. The pressing question is: can we trust AI systems, and in particular their core machine learning (ML) algorithms, to analyze patient data and deliver predictions to doctors? Can the ML algorithms explain their predictions to the doctors? In a joint collaboration between Karolinska University Hospital (KUH), Karolinska Institutet (KI) and KTH, we will develop trustworthy ML algorithms with explainable results, and then apply them to discover new uses for approved medications that were originally developed for different medical conditions.
Background
Explainable machine learning (XML) based medication repurposing for dementia (XMLD) refers to the development and application of XML algorithms to identify existing drugs or therapies that can be repurposed for the treatment or management of dementia. The goal is to discover new uses, in patients with dementia, for approved drugs or therapies that were originally developed for different medical conditions. For a patient and/or a class of patients, identifying promising candidates among the many existing drugs is a variable selection problem, where XML can help.
The XML algorithms will be developed to analyze and identify patterns, relationships, and potential associations between drug characteristics, disease severity, and patient outcomes. Medication repurposing for dementia using XML has many advantages, such as cost and time savings, known safety profiles, a broad range of medication candidates, and improved treatment efficiency. Overall, it addresses a pressing healthcare problem with potentially widespread impact. While our focus in this project is dementia, the accumulated technological knowledge can be used for medication repurposing for many other health problems and diseases in clinics. The proposed XMLD project will establish a strong cooperation between medical doctors and ML researchers in the clinical environment.
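The variable-selection framing can be made concrete with a toy sketch. The data, the drug indices, the effect sizes, and the plain lasso solver below are illustrative stand-ins, not the project's actual XML algorithms: among many candidate drugs, only a few truly influence the outcome, and a sparse model recovers which ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 patients x 30 candidate drug variables; only
# drugs 2, 7 and 15 truly affect the outcome (all values invented).
n, p = 200, 30
X = rng.normal(size=(n, p))
true_effects = np.zeros(p)
true_effects[[2, 7, 15]] = [1.5, -2.0, 1.0]
y = X @ true_effects + 0.1 * rng.normal(size=n)

def lasso_ista(X, y, lam=0.1, n_iters=500):
    """Sparse variable selection via ISTA: a gradient step on the
    least-squares term, then soft-thresholding (the L1 penalty)."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        w = w - step * (X.T @ (X @ w - y))
        w = np.sign(w) * np.maximum(np.abs(w) - lam * step, 0.0)
    return w

w = lasso_ista(X, y)
selected = np.flatnonzero(np.abs(w) > 0.05)
print("selected drug indices:", selected)
```

The sparse coefficients are directly inspectable, which is the sense in which such variable-selection models lend themselves to explanation; the project's XML methods would additionally need to justify each selection to clinicians.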
Partner Postdoc
Xinqi Bao
Main supervisor
Saikat Chatterjee
Co-supervisor(s)
Martina Scolamiero
About the project
Objective
We develop a novel multimodal imaging database, PelvicMIM, by integrating next-generation digital diagnostic technologies to advance the evaluation of childbirth-related pelvic floor muscle injuries. This effort includes the development and validation of cutting-edge imaging modalities: Shear Wave Elastography (SWE), Magnetic Resonance Elastography (MRE), and Diffusion Tensor Imaging (DTI). These techniques will be applied in vivo to quantify the biomechanical and structural properties of pelvic floor muscles. A deep learning-based image processing framework will be designed for multimodal image registration, enabling the overlay of stiffness maps from MRE/SWE and fiber orientations from DTI onto MRI and ultrasound images. Our proposed approach facilitates cross-modality analysis, offering deeper insights into muscle function and injury mechanisms.
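A toy illustration of the registration step is sketched below. It is translation-only and correlation-based, with a synthetic image; the project's deep-learning framework targets far more complex, multimodal deformations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "fixed" image and a translated copy standing in for a
# second modality to be registered (all values invented).
fixed = rng.random((32, 32))
moving = np.roll(fixed, (3, -2), axis=(0, 1))

def register_translation(fixed, moving, max_shift=5):
    """Exhaustively search integer shifts; score each candidate by the
    correlation between the fixed image and the shifted moving image."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(moving, (dy, dx), axis=(0, 1))
            score = np.sum(fixed * shifted)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

shift = register_translation(fixed, moving)
print("recovered shift:", shift)
```

Once the aligning transform is known, a stiffness map or fiber-orientation map acquired in one coordinate system can be warped and overlaid onto the anatomical MRI or ultrasound image, which is exactly the overlay step described above.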
Background
One in two middle-aged women suffers from pelvic floor dysfunction, such as urinary and fecal incontinence or prolapse of the pelvic organs into the vagina, which profoundly impairs quality of life. Injuries to the pelvic floor muscles during childbirth are highly associated with pelvic floor dysfunction later in life. Nevertheless, injuries to these muscles, which cannot be surgically repaired, have been largely ignored and poorly studied. The Swedish Agency for Health Technology Assessment, SBU, has identified birth-related injuries to the levator ani muscle (LAM), a complex of the three largest muscles of the pelvis, as a priority area for research (April 2019). Although recent research also highlights the urgent need for quantitative assessment of LAM injuries, clinical practice still relies on conventional ultrasound, which lacks the ability to quantify biomechanical or structural properties that are important indicators of soft tissue health. These properties are crucial for the assessment of the LAM, as it is a complex structure of three muscles working together in a sheet-like shape with different layers and fiber directions.
Crossdisciplinary collaboration
The team of researchers is composed of members from the KTH School of Engineering Sciences in Chemistry, Biotechnology and Health, Department of Biomedical Engineering and Health Systems and KTH School of Engineering Science, Department of Engineering Mechanics. The project is conducted in close collaboration with clinical partners at Karolinska University Hospital.