Background and summary of fellowship
Fundamental bounds of information processing systems provide the limits on theoretically achievable performance. For instance, in communications, the information-theoretic Shannon capacity describes the fundamental bound on the communication rate that can be achieved with vanishing error probability. This fundamental bound can then be used as a benchmark for actual system designs. It is therefore valuable for assessing a given design and for deciding whether further development work on that design is worthwhile or whether a change of system would be the better strategy for improvement. In a privacy and security setting, fundamental bounds describe the performance an adversary can achieve in the worst case. They can therefore be used to derive security or privacy guarantees, which leads to security- or privacy-by-design. Moreover, the proof of a fundamental bound often reveals which information-processing structure is the most promising strategy; it thus provides a deep understanding of information processing and guides the way towards efficient design structures. The results are often timeless, and numerous interesting open problems remain to be solved.
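As a concrete illustration of such a bound (a sketch added here for the reader, not part of the fellowship description), the Shannon capacity of the additive white Gaussian noise channel is C = B·log2(1 + SNR) bits per second, where B is the bandwidth and SNR the linear signal-to-noise ratio:

```python
import math

def awgn_capacity(bandwidth_hz, snr_linear):
    """Shannon capacity C = B * log2(1 + SNR) of an AWGN channel, in bit/s."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 1 MHz channel at 20 dB SNR (linear SNR = 100) supports at most
# ~6.66 Mbit/s with vanishing error probability; any actual system
# design can be benchmarked against this number.
limit = awgn_capacity(1e6, 100)
```

No practical scheme can exceed this rate, which is what makes the bound useful as a design benchmark.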

In this project, we want to explore fundamental bounds for traditional communication scenarios, source coding setups, distributed decision-making, physical-layer security and privacy, as well as statistical learning and data disclosure.

Background and summary of fellowship
One of the main causes of the insecurity of IT systems is complexity. For example, the Linux kernel has been designed to run on all possible platforms (including our IoT light bulbs, the largest supercomputers, and the International Space Station) and includes all sorts of features to accommodate these usage scenarios. Even though the kernel is a foundational part of the majority of our software infrastructure and has been developed by highly skilled engineers, this complexity results in 30 million lines of code that are virtually impossible to implement correctly. The kernel contains thousands of documented bugs and an unknown number of undiscovered issues. This leaves fertile ground for attackers, who can steal our data, use our resources to mine bitcoins, or take complete control of our systems.

We believe that systems should be developed with much more rigorous techniques. We develop methods to mathematically model hardware and software systems, and techniques to verify with mathematical precision the impossibility of vulnerabilities. Even though these techniques are heavy-duty, we focus our research on the analysis of the components that constitute the root of trust of the IT infrastructure, with the goal of demonstrating that faults of untrusted applications can be securely contained and cannot affect the critical parts of the system. That is, we do not aim to guarantee that Pokémon GO is bug-free, but we can mathematically rule out that its bugs can be used to steal your BankID or your crypto wallet. In particular, we are currently focusing on developing theories to prevent recently famous vulnerabilities (e.g., Spectre) that are caused by low-level processor optimizations.
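To give a flavour of what "verifying the impossibility of vulnerabilities" means (a toy sketch of our own, far simpler than the group's actual tooling), one can model a system as a state machine and exhaustively check that no sequence of untrusted actions can ever alter protected state:

```python
# Toy isolation model: a state is (untrusted_mem, protected_mem).
# In this model, untrusted actions may write only untrusted memory;
# we then verify the isolation invariant over ALL reachable states.

ACTIONS = [lambda u, p, v=v: (v, p) for v in range(4)]  # untrusted writes

def reachable(initial, actions, depth):
    """All states reachable from `initial` within `depth` action steps."""
    seen = {initial}
    frontier = {initial}
    for _ in range(depth):
        frontier = {a(*s) for s in frontier for a in actions} - seen
        seen |= frontier
    return seen

states = reachable((0, 42), ACTIONS, depth=5)
# The invariant holds in every reachable state: protected memory is 42.
assert all(p == 42 for _, p in states)
```

Real proofs of this kind are carried out over detailed processor and kernel models with theorem provers rather than enumeration, but the shape of the guarantee is the same: a property shown to hold on every reachable state.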

Background and summary of fellowship
Data holds the information, and efficient processing of data to extract that information is key to reaching the right decisions. Computers help us understand the data, extract the important information, and finally arrive at a decision. To provide this help, computers use a particular tool from computer science called machine learning. The use of machine learning is growing, from speech recognition to robots to autonomous cars to medical fields, including life-science data analysis. Today, machine learning is at the core of many intelligent systems across all science and engineering fields. Naturally, machine learning has to be highly reliable. Thanks to the Digital Futures fellowship, I am fortunate to address a challenge in modern machine learning: how to make machine learning more trustworthy and unbiased. For example, a camera-based face recognition system should not discriminate against people because of skin colour or gender.

Towards this challenge, a prime concern is to develop explainable machine learning (xML) systems, closely related to explainable artificial intelligence (xAI). Ideally, users should be able to understand the precise effects and outcomes of such systems before they are put into use. Mistakes after deployment are costly in many situations, for example, when detecting an infection in a clinic or hospital. We should fully understand how computers use data to extract information and then reach a decision using that information, and, in turn, how computers can explain their actions to users. The development of xML/xAI requires a confluence of mathematics, computer science and a real-life understanding of application scenarios, including user perspectives.
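One standard way to quantify the kind of bias mentioned above is the demographic parity gap: the difference in positive-decision rates between groups. The sketch below (illustrative data and names, not from the project) computes it for a classifier's decisions:

```python
def demographic_parity_gap(decisions, groups):
    """Absolute difference between the highest and lowest
    positive-decision rate across groups."""
    counts = {}  # group -> (positives, total)
    for d, g in zip(decisions, groups):
        pos, tot = counts.get(g, (0, 0))
        counts[g] = (pos + (1 if d else 0), tot + 1)
    rates = [pos / tot for pos, tot in counts.values()]
    return max(rates) - min(rates)

# Group A is accepted 3/4 of the time, group B only 1/4: gap = 0.5.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(decisions, groups)
```

A gap near zero means both groups receive positive decisions at similar rates; a large gap flags exactly the discrimination a trustworthy system should avoid.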

Background and summary of fellowship
Wireless connectivity is a key enabler for the digital transformation of society, and we have started to take its availability for granted. Although wireless data speeds have grown tremendously, we still experience unreliable wireless coverage; for example, video streaming might work flawlessly until it suddenly stalls when you walk around a corner. The Digital Futures fellowship will enable my research group at KTH to tackle this challenge. We need to explore new ways of building wireless network infrastructure to make coverage holes an issue of the past.

Two particular solutions will be explored. The first is to spread out base stations over the city, instead of collecting them in towers, to increase the chance that every location is covered by some of them. The second solution is to make use of reconfigurable “mirrors”, which are thin plates that can be placed on buildings to reflect signals in controllable ways to remove coverage holes. These mirrors are not moved mechanically but change their electrical properties to achieve the same effect. The project will also explore how the “spill energy” from wireless signals can be harvested to recharge the batteries of devices, particularly internet-of-things equipment that is not operated by humans.
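A back-of-the-envelope sketch (our illustration, not a project result) shows why controllable reflection helps: treating each of the N mirror elements as a unit-amplitude reflected path, the received power is the squared magnitude of the phasor sum. When the mirror's phase shifts cancel the propagation phases, the paths add coherently and power grows as N²; with arbitrary phases, they largely cancel:

```python
import cmath

def received_power(mirror_phases, channel_phases):
    """Power of the phasor sum of unit-amplitude reflected paths;
    the mirror adds a controllable phase shift to each path."""
    total = sum(cmath.exp(1j * (c + m))
                for c, m in zip(channel_phases, mirror_phases))
    return abs(total) ** 2

channel = [0.1 * i for i in range(64)]   # arbitrary propagation phases
aligned = [-c for c in channel]          # mirror cancels each phase
coherent = received_power(aligned, channel)   # 64**2 = 4096
passive  = received_power([0.0] * 64, channel)  # far smaller: paths cancel
```

This N² scaling with purely passive hardware is what makes such reconfigurable surfaces attractive for filling coverage holes.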

About the project

Objective
The project aims to model and predict dynamic urban road traffic noise by integrating data-driven microscopic traffic simulation with instantaneous noise emission and propagation models. It will use passive traffic data and publicly available built-environment data to demonstrate a high-fidelity dynamic traffic noise simulator with geographic information system (GIS) tools that predicts and visualizes noise levels at a time scale and geographic granularity previously unattained.

The DIRAC model and tool will pave the way for complex, dynamic urban road traffic noise modeling using passive traffic data, bridge the digital and physical urban worlds, and support ongoing efforts and collaborative decision-making on noise mitigation measures for a livable and healthy city, particularly given the growing demand for urban mobility of both people and goods.
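For intuition on how per-vehicle noise emissions aggregate (a textbook acoustics formula, not the DIRAC model itself), sound pressure levels in decibels are combined energetically rather than arithmetically: L_total = 10·log10(Σᵢ 10^(Lᵢ/10)). A minimal sketch:

```python
import math

def combine_levels_db(levels):
    """Energetic sum of sound pressure levels in dB:
    L_total = 10 * log10(sum_i 10**(L_i / 10))."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels))

# Two equally loud 70 dB vehicles yield ~73 dB, not 140 dB:
# doubling the number of equal sources adds about 3 dB.
two_cars = combine_levels_db([70, 70])
```

This logarithmic behaviour is one reason fine-grained, dynamic simulation matters: a single loud event can dominate the instantaneous level in a way that yearly averages hide.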

Background
Noise pollution is increasingly considered a major environmental issue in urban areas, with rapid urbanization (projected to reach 68% of the world’s population by 2050) and the growing demand for mobility of people and goods in cities. It is recognized as a major cause of public health concerns, e.g., annoyance, sleep disturbance, other health effects (such as depression, anxiety, and mood swings), and decreased productivity. In particular, road traffic is deemed the major noise source in urban areas: some 125 million people in the EU (32% of the total population) are estimated to be exposed to harmful traffic noise levels.

Several initiatives have therefore followed the European Noise Directive, which mandates the development of urban noise maps. However, strategic noise maps exhibit strong limitations with respect to noise exposure mitigation measures: they rely on long-term (yearly) averages based on traffic flow, are static representations, are source- rather than receiver-centric, and are non-representative of the transitioning vehicle fleet. They are limited in modeling and predicting fluctuations of noise levels over time under planning and management interventions, even though such predictions are a fundamental tool for addressing specific dimensions of human health effects.

Building on the multidisciplinary team’s expertise and projects in traffic simulation and traffic noise modeling, the DIRAC project aims to develop and demonstrate a high-fidelity road traffic noise simulation model for urban areas, empowered by ubiquitous passive traffic data, open-source data, and Digital Twin models. To this end, data-driven models work in parallel with real-life measurements to reproduce findings and predict the results of response actions. The models are agent-based (ABM) and open-source, enabling city stakeholders to recognize their roles and management models for informed decision-making on noise mitigation measures toward more livable and healthy cities.

Crossdisciplinary collaboration
The researchers in the team represent the KTH School of Architecture and Built Environment (ABE), Civil and Architectural Engineering Department, Transport Planning Division and KTH School of Engineering Science (SCI), Engineering Mechanics Department, The Marcus Wallenberg Laboratory for Sound and Vibration Research. The project is supported by strategic research partners at the KTH Center of Traffic Research (CTR), VTI (Swedish National Road and Transport Research Institute), Linköping University, and the University of Tartu.

About the project

Objective

Background
Despite the enormous investments in automated vehicles, challenges regarding safety and security remain, and open research testbeds to address these challenges are lacking. The CAVeaT project will address both needs.

The project will leverage advances and resources made available from industrial partners, from the TECoSA edge-computing and 5G testbed, and from the ITM and EECS schools at KTH, including the AD-EYE platform and an adversarial attack pipeline for autonomous driving simulation.

Crossdisciplinary collaboration
The researchers in the team represent the KTH School of Industrial Engineering and Management and the KTH School of Electrical Engineering and Computer Science.

About the project

Objective
By delivering reliable, local and near real-time data about wildlife, FLOX Robotics drones provide insights for data-based wildlife-related decisions to veterinary institutes, nature conservationists, hunting associations, insurance companies and many others. Through AI-assisted identification of wildlife species, these stakeholders can track animals that are injured, carry diseases or have been involved in an incident.

The project demonstrates an integrated solution for automated mapping, identification, tracking and, when required, repelling of wild animals using autonomous drones with AI-assisted computer vision and ultrasound repellent technology, combined with a geographic information system (GIS)-like platform for data visualization, analysis and decision making.

Background
The problem of wildlife damage is widespread all over the world, from Sweden to Italy, in the US, India and many other countries. Historically, there have been limited means for quantifying the wild animal population, its movement patterns, and the damage it causes. Damage by wild animals to cultivated fields is a major cause of profit loss for farmers in Europe. In Sweden, in 2020, damage caused by wildlife occurred on 17% of the cultivated area for cereals and nearly 28% for starch potatoes. Temporary grasses are Sweden’s largest crop by acreage, and 17% of their cultivated area had some form of wildlife damage in 2020 [www.scb.se]. Also in 2020, around 50% of Swedish agricultural companies reported damage from wildlife, and more than one-third of farmers stated that wildlife affects their choice of crops.

Wildlife-related damage is widely present not only in agriculture but also in forest areas. The “rooting” and “wallowing” of wild boars also have an environmental impact, destroying vegetation and degrading water quality. For wildlife-related insurance cases, there is often no physical evidence of the species involved in the damage, and the burden of proof required for verification is often too high to meet, preventing payouts.

The project solution will help public authorities and decision-making bodies to access site-specific identification of wildlife over larger areas, aggregated from separate fields to regional, national and even international levels.

Crossdisciplinary collaboration
The researchers in the team represent RISE Digital Systems and the Department of Computer and System Sciences at Stockholm University.