About the project
Objective
DeepFlood aims to develop novel hybrid models and flood maps with water depth information to support real-time decision-making, and to present them to the Swedish and international scientific communities and to stakeholders. The research will improve our fundamental understanding of SAR-based flood mapping by developing novel hybrid PolSAR-metaheuristic-DL models.
Background
Precise and fast flood mapping helps water resource managers, stakeholders, and decision-makers mitigate the impact of floods. Rapid detection of flooded areas and information about water depth are critical for assisting flood responders, such as operations specialists and local and state authorities, and for increasing the preparedness of the broader community through actions such as home risk mitigation and evacuation planning.
This project seeks to fill current knowledge gaps in flood management by enabling accurate and rapid flood mapping and providing water depth information using novel hybrid PolSAR-metaheuristic-DL models and high-resolution remote sensing data. It will also advance flood detection and support notification systems by identifying 1) bands and polarizations that contain the most information for detecting flooded areas in different land covers; 2) the most effective PolSAR features in each band for flood mapping; 3) whether the most informative PolSAR features are the same for different land covers; and 4) which of the widely used metaheuristic and DL models are most efficient for detecting flooded areas and estimating water depth.
Crossdisciplinary collaboration
The researchers in the team represent KTH Royal Institute of Technology and Stockholm University.
About the project
Objective
The main purpose of the DeepAqua project is to quantify changes in surface water over time. We want to create a real-time monitoring system for changes in water bodies by combining remote sensing technologies, including optical and radar imagery, with deep learning techniques for computer vision and transfer learning. This innovative strategy will allow us to calculate water extent and level dynamics with unprecedented accuracy and speed. The approach offers a practical solution for monitoring water extent and level dynamics, making it highly adaptable and scalable for water conservation efforts.

Background
Climate change presents one of the most formidable challenges to humanity. In the current year, we have witnessed unprecedented heatwaves, extreme floods, increasing water scarcity in various regions, and a troubling surge in global species extinctions. Halting the advance of climate change necessitates preserving our existing water resources. At the same time, recent advances in remote sensing technology have yielded a wealth of high-quality data, opening new avenues for researchers to apply deep learning (DL) techniques to water detection. DL is a machine learning methodology that consistently outperforms traditional approaches across diverse domains, including computer vision, object recognition, machine translation, and audio processing.
This project, named DeepAqua, seeks to enhance our understanding of surface water dynamics and their response to environmental changes by developing innovative DL architectures, such as Convolutional Neural Networks (CNN) and Transformers, designed specifically for the semantic segmentation of water-related images. It is worth noting that many DL models depend on substantial amounts of ground truth data, which can be costly to obtain. Our previous findings suggest that we can train a CNN using water masks based on the Normalized Difference Water Index (NDWI) to detect water in Synthetic Aperture Radar (SAR) imagery without the need for manual annotation. This breakthrough promises to have a significant impact on water monitoring since generating data based on NDWI masks is virtually cost-free compared to traditional methods involving fieldwork data collection and manual annotation.
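As a minimal illustration of this weak-labelling idea, the NDWI used to generate the training masks is a simple band ratio, NDWI = (Green − NIR) / (Green + NIR), with positive values typically indicating water. The sketch below is our own illustrative code, not the project's implementation; the function name, threshold, and toy reflectance values are assumptions.

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """Binary water mask from NDWI = (Green - NIR) / (Green + NIR).

    `green` and `nir` are co-registered arrays of reflectance values;
    pixels with NDWI above `threshold` are labelled water (1), else land (0).
    """
    green = np.asarray(green, dtype=np.float64)
    nir = np.asarray(nir, dtype=np.float64)
    denom = green + nir
    ndwi = np.zeros_like(green)
    # Avoid division by zero where both bands are zero.
    np.divide(green - nir, denom, out=ndwi, where=denom != 0)
    return (ndwi > threshold).astype(np.uint8)

# Toy example: top row is water-like (green >> NIR), bottom row land-like.
green = np.array([[0.30, 0.25], [0.10, 0.08]])
nir = np.array([[0.05, 0.04], [0.30, 0.28]])
mask = ndwi_water_mask(green, nir)  # [[1, 1], [0, 0]]
```

A mask like this, computed from optical imagery, could then serve as a free training label for a segmentation network applied to co-located SAR imagery, which is the essence of the annotation-free approach described above.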
Crossdisciplinary collaboration
The researchers in the team represent the Division for Water and Environmental Engineering (SEED/ABE), the Division of Software and Computer Systems (CS/EECS), KTH, and Stockholm University.
About the project
Objective
The project envisions a mobile cyber-physical system where people carrying mobile sensors (e.g., smartphones, smartcards) generate large amounts of trajectory data to sense and monitor human interactions with physical and social environments. The project aims to develop a causal artificial intelligence (AI) methodology to analyze and model human mobility behaviour dynamics (decision-making) using individual travel trajectory data and develop the causal diagrams of human mobility behaviour under disturbances that could help design effective strategies for sustainable and resilient urban mobility systems. The research challenges are learning the complex ‘hidden’ human decision-making mechanism from pervasive ‘observed’ trajectories and developing effective, scalable causal AI models and algorithms.
Background
The ever-changing mobility landscape and climate change continue challenging existing operating models and the responsiveness of city planners, policymakers, and regulators. City authorities have growing investment needs that require more focused operations and management strategies that align mobility portfolios to societal goals. The project targets the root cause of traffic (humans) and develops novel analytic techniques to learn and predict human mobility behaviour dynamics from pervasive mobile sensing data. These can help cities both meet sustainability challenges (through predicting congestion, emissions, and energy consumption) and improve urban resilience to disruptive events (such as infrastructure failures, natural disasters, or pandemics).
The human mobility area has witnessed active developments in two broad but separate fields: transport science and computer science. They work with different data, use different methods, and answer different but overlapping questions: mobility behaviour modelling using 'small' data in transport, and mobility pattern analysis using 'big' data in computer science. A solid bridge between the two is beneficial and needed but remains an open challenge. Mobile sensing and information technology have enabled us to collect large amounts of mobility trajectory data from human decision-makers. Predictive AI techniques show the potential to learn and predict human mobility efficiently from these trajectory data. However, they continually run up against the limits of what they observe (correlations, not causal relationships), which hinders any serious applicability to preparedness and response policies for cities without an understanding of the causal mobility dynamics.
cAIMBER will bridge the two human mobility research streams in transport science and computer science. It will also develop a causal AI methodology, merging the reinforcement learning (RL) and causal inference research fields. Integrating interdisciplinary expertise and techniques will yield generalizable insights about human behaviour dynamics that contribute to the scientific community's theoretical conceptualization of travel choices and decision-making mechanisms. Practically, cAIMBER will conduct extensive empirical analysis using a comprehensive dataset covering different types of system disturbances over seven years. The accumulated knowledge of human mobility under these situational contexts will help city planners and service operators make more informed decisions for sustainable and resilient travel.
Crossdisciplinary collaboration
The researchers in the team represent the KTH School of Architecture and Built Environment (ABE), Civil and Architectural Engineering Department, Transport Planning Division and KTH School of Engineering Science (SCI), Mathematics Department, Mathematics for Data and AI Division. Strategic research partners at KTH iMobility Lab and MIT Transit Lab support the project.
Watch the recorded presentation at the Digitalize in Stockholm 2023 event:
About the project
Objective
The aim of this project is to analyse the environmental impacts of increased digitalization and the use of Information and Communication Technologies (ICT). The project can include both method development and case studies. The impacts will be analysed using life cycle assessment and life cycle thinking. Case studies can vary in scale and include specific devices, applications, and sectoral assessments. Initially, the focus will be on climate impacts and energy use, but it may be broadened to a larger spectrum of environmental impacts. Assessments will cover the direct impacts of ICT as well as different types of indirect impacts, including rebound effects.
Background
The ICT sector has an environmental footprint. The future development of this footprint is debated, and it is important that the discussions have a scientific basis. Digitalisation may be a tool for reducing environmental impacts: by improving efficiencies and dematerialising products and services, new ICT applications can reduce the footprints of other sectors. More studies are, however, needed in order to understand when this actually leads to decreased impacts and when there is a risk of indirect rebound effects that increase use and footprints. Environmental life cycle assessment is a standardised method for assessing the potential environmental impacts of products, services, and functions "from the cradle to the grave", i.e. from the extraction of raw materials via production and use to waste management. It is used for analysing the environmental footprints, i.e. the direct impacts, of ICT. It can also be used for analysing different types of indirect effects.
Partner Postdocs
After working in industry on large-scale refrigeration and heat pump systems and as an entrepreneur with solar pumps, Shoaib Azizi completed a master's program in Sustainable Energy Engineering at KTH. He then moved to Umeå in northern Sweden for a multi-disciplinary PhD project on energy-efficient renovation of buildings, which included research on the opportunities for digital tools to improve management and energy efficiency in buildings. He defended his thesis "A multi-method Assessment to Support Energy Efficiency Decisions in Existing Residential and Academic Buildings" in September 2021. Shoaib is now a Digital Futures Postdoc researcher in digitalization and climate impacts at the Department of Sustainable Development, Environmental Science and Engineering (SEED) at KTH. His research applies life cycle assessment methodology to understand various aspects of digitalization and its impacts on the environment.
Anna Furberg defended her PhD thesis in 2020 at Chalmers University of Technology. Her thesis, titled “Environmental, Resource and Health Assessments of Hard Materials and Material Substitution: The Cases of Cemented Carbide and Polycrystalline Diamond”, involved Life Cycle Assessment (LCA) case studies and method development. After her thesis, she worked at the Norwegian Institute for Sustainability Research, NORSUS, on various LCA projects and, in several cases, as the project leader. In 2022, she was awarded the SETAC Europe Young Scientist Life Cycle Assessment Award, which recognizes exceptional achievements by a young scientist in the field of LCA. Anna has a Digital Futures Postdoc position in digitalization and climate impacts at the Department of Sustainable Development, Environmental Science and Engineering (SEED) at KTH.
Supervisor
Göran Finnveden is a Professor of Environmental Strategic Analysis at the Department of Sustainable Development, Environmental Sciences and Engineering at KTH. He is also the director of the Mistra Sustainable Consumption research program. His research focuses on sustainable consumption, life cycle assessment, and other sustainability assessment tools. The research includes method development and case studies in different areas, including the environmental impacts of ICT.
About the project
Objective
People with autism are a large group at Day Activity Centers, and autism is one of the most common neurodevelopmental diagnoses, one that can imply severe disability for many people. The Platform for Smart People (PSP) project — in full, "Understanding Inclusion Challenges to Design and Develop an Independent Living Platform in a Smart Society for and with people with autism" — is about creating a platform that makes people with autism more independent of help from others in everyday life situations. The focus is on real-life challenges and opportunities at Day Activity Centers.
This will be achieved by co-designing and developing a Platform for Smart People. The platform will include an accessible Augmented Reality app with a Machine Learning framework and Civic Intelligence to advance the current state-of-the-art digitalisation and smart society for people with autism. An iterative co-design process will ensure that requirements for people with autism are met in the platform.
Background
People on the autism spectrum face particular challenges. Tasks that neurotypical people take for granted (e.g. planning a day) may be beyond the abilities of people with autism, who must nevertheless live independently and work. Current research and development point to promising opportunities for overcoming these barriers.
Augmented Reality means that the actual world is augmented with digital objects (graphics, audio, haptics) by detecting actual-world objects, tracking positions, sensing distance and depth, and integrating light settings. Previous research shows the feasibility of using Augmented Reality to help autistic people with social communication skills and independent living tasks.
Machine Learning is a tool that can automate tasks to make the augmented world more accessible, such as identifying real-world objects. However, Machine Learning has a so-called cold-start problem where big data sets are needed to make it useful. To overcome this, a Civic Intelligence component is needed, where staff at Day Activity Centers can contribute with individual adaptations that they know work for each person. The results can have a wide outreach by combining the advances above and integrating them with the Global Public Inclusive Infrastructure research efforts.
Crossdisciplinary collaboration
The partnership comprises a multidisciplinary team assembled to meet the cross-disciplinary demands of the project. The researchers in the team represent the Department of Computer and Systems Sciences at Stockholm University (SU), the Department of Special Education at SU, and KTH. In addition, several Day Activity Centers in Stockholm are involved. The project Advisory Board consists of representatives from Autism och Aspergerförbundet, Rinkeby-Kista Day Activity Center, IBM, Trace R&D Center and Raising the Floor.
Watch the recorded presentation at the Digitalize in Stockholm 2023 event:
Background and summary of fellowship
Power electronics technology enables efficient electricity usage by controlling electronic devices with digital algorithms. Software-controlled power-electronic converters are widely used in modern society and have become a transformational technology for the energy transition. The proliferation of power-electronic converters brings greater flexibility and improved efficiency to legacy energy systems, yet it also introduces new security challenges. In recent years, power disruptions induced by erratic interactions of converter-based energy assets have been increasingly reported. Methods for the dynamics analysis of power electronic systems are urgently needed to screen for instability and security risks in modern energy systems.
This project aims to leverage digital technologies to redefine the paradigm of dynamics analysis for power electronic systems. First, a trustworthy artificial intelligence (AI) modelling framework for converter-based energy assets will be established. Physical-domain knowledge will be combined with the recent advances in machine learning algorithms to make the AI model more reliable. Then, based on the AI models of power converters, a scalable and efficient dynamics analysis approach will be developed for power electronic systems, ranging from single converters to hundreds of thousands of converters. Finally, physics-based models of benchmark energy systems will be built to test the effectiveness of developed models and methods.
Wang's research is in the area of power electronics-controlled power systems. He is active in the broader community working in this area and will bring further visibility and provide strong leadership.
Xiongfei Wang has been a Professor with the Division of Electric Power and Energy Systems at KTH Royal Institute of Technology since 2022. From 2009 to 2022, he was with the Department of Energy Technology, Aalborg University, where he became an Assistant Professor in 2014, an Associate Professor in 20
Background and summary of fellowship
Fundamental bounds of information processing systems define the limits of theoretically achievable performance. For instance, in communications, the information-theoretic Shannon capacity describes the fundamental bound on the communication rate that can be achieved with vanishing error probability. This fundamental bound can then be used as a benchmark for the actual system design. It is therefore valuable for assessing an actual system design and for deciding whether additional development work on the current design is worthwhile or whether a change of system design would be the better strategy for further improvement. In a privacy and security setting, fundamental bounds describe the performance an adversary can achieve in the worst case. They can therefore be used to derive security or privacy guarantees, which leads to security- or privacy-by-design. Moreover, the proof of a fundamental bound often reveals which information-processing structure is the most promising strategy; it thus provides a deep understanding of information processing and guides the way towards efficient design structures. The results are often timeless, and there are numerous interesting open problems that remain to be solved.
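As a concrete instance of such a benchmark, the Shannon capacity of an additive white Gaussian noise channel is C = B log2(1 + SNR), where B is the bandwidth and SNR the linear signal-to-noise ratio. The sketch below is only an illustration of how the bound is evaluated; the bandwidth and SNR figures are arbitrary example values, not from the project.

```python
import math

def awgn_capacity_bps(bandwidth_hz, snr_linear):
    """Shannon capacity of an AWGN channel: C = B * log2(1 + SNR), in bit/s."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Example: a hypothetical 1 MHz channel at 15 dB SNR.
snr = 10 ** (15 / 10)            # convert dB to linear scale
capacity = awgn_capacity_bps(1e6, snr)  # roughly 5 Mbit/s
```

Comparing an actual system's achieved rate against this number is exactly the kind of benchmarking described above: a large gap suggests further development of the current design may pay off, while a small gap suggests a change of system design is needed for further improvement.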
In this project, we want to explore fundamental bounds for traditional communication scenarios, source coding setups, distributed decision-making, physical-layer security and privacy, as well as statistical learning and data disclosure.