About the project

Objective
To develop an AI-driven information retrieval system for connecting engineers with existing enterprise design knowledge in a transparent and semantic manner.

Background
Engineers with design experience predating computational tools are retiring. At the same time, widespread and informal use of generative language models devalues documentation, threatening to bury records of human creativity. Together with our industry partner NEKTAB (Nordic Electric Power Technology AB), we use AI-based language models to structure and semantically retrieve multimodal artifacts of engineering design. Rather than generating guessed answers, our method emphasizes transparency by connecting questions to actual instances of prior documented information, an important feature for preserving engineering knowledge.
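As an illustration of the retrieval principle (not the project's actual pipeline), the sketch below ranks documents by similarity to a query and returns the documents themselves, so every answer is traceable to a recorded source. The bag-of-words `embed` function is a stand-in for a neural text encoder; all names and the sample documents are hypothetical.

```python
from collections import Counter
from math import sqrt

def embed(text):
    # Toy bag-of-words "embedding"; a real system would use a neural text encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(count * b[token] for token, count in a.items())
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query, documents):
    # Rank and return the documents themselves: the answer is always a
    # pointer to recorded knowledge, never generated text.
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)

docs = [
    "Fatigue analysis of the gearbox housing, revision B",
    "Meeting notes on the cafeteria menu",
]
best = retrieve("gearbox fatigue analysis", docs)[0]
```

Because the system returns ranked source documents rather than synthesized text, an engineer can always inspect where an answer came from.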

Crossdisciplinary collaboration
The project is a collaboration between computer scientists and mechanical engineers, spanning natural language processing, data engineering, solid mechanics, and engineering design.

About the project

Objective
The project’s primary goal is to devise strategies for mitigating losses in properties covered by the City of Stockholm’s wholly owned insurance company, St Erik. To lay the foundation for a loss reduction strategy concerning fire and water losses, the project combines and analyses insurance data and administrative building-related data, supplemented with details of loss reduction measures taken at the individual building level. The results of the analysis will be used to implement concrete loss reduction measures.

Background
The City of Stockholm insures its buildings via St Erik. The insured include the city’s three major housing companies and its real estate office. The housing companies Stockholmshem AB, Svenska Bostäder AB, and Familjebostäder AB together own about 70,000 apartments. The City of Stockholm’s insurance company is currently experiencing an upward trend in insured losses, a trend expected to continue, partly due to the impact of climate change. Improving loss prevention measures is crucial to enhancing resilience.

In collaboration with the City of Stockholm and its municipal companies, the project strives to both identify and implement effective loss prevention. The work will contribute to better adapting the City of Stockholm to the consequences of climate change.

Crossdisciplinary collaboration
The project partner is the City of Stockholm’s insurance company, St Erik AB.

About the project

Objective
The overall objective of the project is to shorten the time needed to securely and safely deploy software updates to vehicles in the automotive industry. The project addresses this in the context of software development for heavy vehicles at Scania, building on Autodeduct, the toolchain for automated formal verification of C code developed in the previous AVerT research project. Autodeduct is based on the Frama-C code analysis framework and its ACSL contract specification language for C code.

As specific goals, the project will (i) extend Autodeduct with techniques for incremental formal verification as software evolves, (ii) extend the toolchain with support for non-functional requirements in software contracts focusing on safety and security properties relevant to Scania vehicles and including control flow and data flow, (iii) evaluate the techniques as implemented in the new toolchain on relevant code from Scania’s codebase to determine their efficacy, and (iv) develop a case study applying the new toolchain to a realistic software development scenario that demonstrates its applicability in an industrial setting.

Background
In the automotive industry, digitization means that vehicles increasingly depend on and are composed of software components, leading towards software-defined vehicles in which most functions are primarily controlled by software. However, vehicle software components need to be continually revised by manufacturers to fix bugs and add functionality, and then deployed to vehicles in operation. Developing and deploying such software updates is currently demanding and time-consuming; it may take months or even years for a new software component revision to reach vehicles.

Delayed deployment of software updates is in large part due to the long-running processes employed to assure that the revised software system meets its requirements, including legal requirements and requirements on safety and security. Currently, these processes often involve costly analysis of a system in a simulated or real environment, e.g., by executing an extensive suite of regression tests. The time required to run such tests can grow with the size of the whole software system, e.g., as measured in lines of code. Regression tests may also fail to cover non-functional properties such as security. The project aims to enable more rapid and trustworthy incremental development of software for heavy vehicles, with guarantees of safety and security. Trust in the vehicle software development process is built by adopting tools with rigorous mathematical guarantees.

Crossdisciplinary collaboration
The project partner is Scania CV AB.

Background and summary of fellowship
Behaviour Trees (BTs) represent a hierarchical way of combining low-level controllers for different tasks into high-level controllers for more complex tasks. The key advantages of BTs have been shown to include modularity, reactivity, and the reusability of low-level controllers across tasks.

In this project, we will use the properties of BTs listed above to synthesize controllers that combine the efficiency of reinforcement learning with formal performance guarantees such as safety and convergence to a designated goal area.
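A minimal sketch of the BT idea, assuming the standard `Sequence`/`Fallback` node semantics from the BT literature (illustrative code, not the project's controllers): leaf actions wrap low-level controllers, and composite nodes combine them hierarchically.

```python
SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

class Action:
    """Leaf node wrapping a low-level controller (here: a plain function)."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self):
        return self.fn()

class Sequence:
    """Ticks children left to right; fails on the first failure, succeeds if all succeed."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != SUCCESS:
                return status
        return SUCCESS

class Fallback:
    """Ticks children left to right; succeeds on the first success (a prioritized 'or')."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != FAILURE:
                return status
        return FAILURE

# Toy task: if the goal is not yet reached, run the move controller; otherwise succeed.
state = {"at_goal": False}
at_goal = Action(lambda: SUCCESS if state["at_goal"] else FAILURE)
move = Action(lambda: (state.update(at_goal=True), SUCCESS)[1])
tree = Fallback([at_goal, move])
```

Because each node only exposes `tick()`, subtrees can be swapped or reused without touching the rest of the tree, which is what makes BTs attractive for combining learned and verified controllers.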

Background and summary of fellowship
Reinforcement Learning (RL) is concerned with learning efficient control policies for systems with unknown dynamics and reward functions. RL plays an increasingly important role in a large spectrum of application domains including online platforms (recommender systems and search engines), robotics, and self-driving vehicles. Over the last decade, RL algorithms, combined with modern function approximators such as deep neural networks, have shown unprecedented performance and have been able to solve very complex sequential decision tasks better than humans. Yet, these algorithms lack robustness and are most often extremely data-inefficient.

This research project aims to contribute to the theoretical foundations for the design of data-efficient and robust RL algorithms. To this end, we follow a fundamental two-step process:

  1. We characterize information-theoretic limits on the performance of RL algorithms (in terms of sample complexity, i.e., data efficiency).
  2. We leverage these limits to guide the design of optimal RL algorithms, i.e., algorithms approaching the fundamental performance limits.
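To make the two steps concrete in the simplest RL setting, consider a multi-armed bandit: information-theoretic results (e.g., the Lai–Robbins bound) say that regret must grow at least logarithmically, and index policies such as UCB1 are designed to approach that limit. The sketch below (an illustration, not one of the project's algorithms) runs UCB1 on a two-armed Bernoulli bandit:

```python
import math
import random

def ucb1(means, horizon, seed=0):
    """UCB1 on a Bernoulli bandit: pull the arm maximizing an optimistic index.
    Index policies of this kind are designed to match logarithmic regret lower bounds."""
    rng = random.Random(seed)
    n_arms = len(means)
    counts = [0] * n_arms      # number of pulls per arm
    sums = [0.0] * n_arms      # cumulative reward per arm
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1        # initialization: pull each arm once
        else:
            # Empirical mean plus an exploration bonus that shrinks as log(t)/n.
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

counts = ucb1([0.9, 0.1], horizon=2000)
```

After 2000 steps the suboptimal arm has been pulled only a logarithmic number of times, which is exactly the behaviour the lower bound says no algorithm can improve upon by more than a constant factor.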

Background and summary of fellowship
Over the last decade, academia and the networked-systems industry have become increasingly interested in novel real-time applications. On the one hand, these applications arise in the area of Cyber-Physical Systems (CPS), where inherently time-sensitive processes are governed by direct actuation. On the other hand, they arise in the context of providing automated feedback to human users, for instance in augmented reality and cognitive assistance. Such interactive applications are very powerful with respect to their future implications for professional education, ambient intelligence, and leisure, and are therefore likely to have a profound impact on networked systems. Nevertheless, from a fundamental perspective, we still understand very little about how to operate networked systems efficiently for such interactive applications.

The goal of this project is to provide fundamental performance models for these interactive applications and for the operation of the underlying networked systems. In contrast to the state of the art, our key approach is to capture the essential trade-offs through a novel notion of the utility of received information over time, and subsequently to optimize systems accordingly. Central to our approach are novel sampling policies, which we derive by leveraging Markov Decision Processes. In this way, we aim to provide a cornerstone for the design of future networked systems exposed to interactive applications.
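As a toy illustration of deriving a sampling policy from an MDP (a hypothetical model with made-up parameters, not the project's formulation), the sketch below takes the age of the freshest received sample as the state, charges the age as a per-step cost (stale information is less useful) plus a fixed cost per transmitted sample, and solves for the optimal sample-or-wait rule by value iteration:

```python
def sampling_policy(max_age=5, sample_cost=2.0, gamma=0.9, sweeps=200):
    """Value iteration for a toy 'when to sample' MDP.
    State: age of the freshest received sample (1..max_age, capped).
    Action 'wait': age grows by one.  Action 'sample': pay sample_cost, age resets to 1."""
    states = range(1, max_age + 1)
    V = {s: 0.0 for s in states}
    for _ in range(sweeps):
        new_V = {}
        for s in states:
            wait = s + gamma * V[min(s + 1, max_age)]    # staleness keeps accruing
            sample = s + sample_cost + gamma * V[1]      # pay cost, freshness restored
            new_V[s] = min(wait, sample)
        V = new_V
    # Greedy policy w.r.t. the converged values: sample when cheaper than waiting.
    policy = {s: ("sample" if s + sample_cost + gamma * V[1]
                  < s + gamma * V[min(s + 1, max_age)] else "wait")
              for s in states}
    return policy

policy = sampling_policy()
```

With these parameters the optimal rule is a threshold policy: wait while the information is fresh, sample once the age exceeds a cutoff, which is the qualitative shape such utility-over-time trade-offs tend to produce.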

Background and summary of fellowship
Sarunas Girdzijauskas’ research interests lie at the intersection of distributed systems and machine learning. They fall under the “Cooperate” and “Learn” research themes, addressing the “Smart Society” and “Rich and Healthy Life” societal contexts of the Digital Futures Strategic Research Programme.

Many societal problems plague current AI services provided by the Big Tech behemoths, which collect and process user data in a centralized manner. Such centralized data collection and processing inevitably leads to a wide spectrum of issues, from data privacy and system security to severe scalability and power consumption problems. Sarunas Girdzijauskas’ research focuses on solutions enabling the transition from classical centralized machine learning to federated and decentralized machine learning technologies. A particular focus is on developing decentralized architectures for graph analytics and graph machine learning, which would enable a wide range of current AI services (e.g., product recommendation systems, social network news feeds) to be provided without centrally collecting user data.
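A minimal sketch of the federated idea, assuming a FedAvg-style round on a toy one-parameter least-squares model (a hypothetical setup; production systems typically add secure aggregation, and fully decentralized variants replace the central aggregator with gossip-based averaging):

```python
def local_step(w, data, lr=0.05):
    """One gradient step of least-squares y = w*x on a client's private data.
    Only the updated parameter leaves the client, never the raw data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def federated_round(w, clients, lr=0.05):
    """One FedAvg-style round: every client updates the shared model locally,
    then the aggregator averages the resulting parameters."""
    updates = [local_step(w, data, lr) for data in clients]
    return sum(updates) / len(updates)

# Two clients whose private datasets both follow the same rule y = 2x.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0)],
]
w = 0.0
for _ in range(100):
    w = federated_round(w, clients)
```

The shared parameter converges to the underlying model (here w = 2) even though no client ever reveals its data points, which is the core privacy argument for federated and decentralized learning.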