Background and summary of fellowship
Power electronics technology enables efficient use of electricity by controlling electronic devices with digital algorithms. Software-controlled power-electronic converters are now used throughout modern society and have become a transformational technology for the energy transition. The proliferation of power-electronic converters brings greater flexibility and improved efficiency to legacy energy systems, yet it also introduces new security challenges. In recent years, power disruptions induced by erratic interactions of converter-based energy assets have been reported with increasing frequency. Methods for analysing the dynamics of power electronic systems are urgently needed to screen for instability and security risks in modern energy systems.
This project aims to leverage digital technologies to redefine the paradigm of dynamics analysis for power electronic systems. First, a trustworthy artificial intelligence (AI) modelling framework for converter-based energy assets will be established. Physical-domain knowledge will be combined with recent advances in machine learning algorithms to make the AI models more reliable. Then, based on these AI models of power converters, a scalable and efficient dynamics analysis approach will be developed for power electronic systems ranging from single converters to hundreds of thousands of converters. Finally, physics-based models of benchmark energy systems will be built to test the effectiveness of the developed models and methods.
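While the project's specific methods are yet to be developed, the basic idea behind screening for instability can be illustrated with a standard small-signal check: linearize the system around an operating point, x' = Ax, and inspect the eigenvalues of the state matrix A. The matrix below is purely illustrative, not a real converter model.

```python
import numpy as np

# Illustrative 2x2 state matrix of a linearized oscillatory system
# (hypothetical values, not taken from any actual converter).
A = np.array([[-50.0, 314.0],
              [-314.0, -50.0]])

# The system is asymptotically stable if every eigenvalue of A
# has a strictly negative real part.
eigenvalues = np.linalg.eigvals(A)
stable = bool(np.all(eigenvalues.real < 0))
print(eigenvalues, "stable" if stable else "unstable")
```

For this toy matrix the eigenvalues are -50 ± 314j, so the linearized system is stable; a scalable analysis method must perform such screening efficiently across very large networks of converters.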
Wang's research focuses on power-electronics-controlled power systems. He is active in the broader community working in this area and will bring further visibility and provide strong leadership.
Xiongfei Wang has been a Professor with the Division of Electric Power and Energy Systems at KTH Royal Institute of Technology since 2022. From 2009 to 2022, he was with the Department of Energy Technology, Aalborg University, where he became an Assistant Professor in 2014, an Associate Professor in 20
Background and summary of fellowship
Fundamental bounds of information processing systems set limits on the theoretically achievable performance. For instance, in communications, the information-theoretic Shannon capacity describes the fundamental bound on the maximum communication rate achievable with vanishing error probability. This fundamental bound can then be used as a benchmark for the actual system design. It is therefore very valuable for assessing an actual system: it helps decide whether additional development work on the current design is worthwhile, or whether a change of system architecture would be a better strategy for further improvement. In a privacy and security setting, fundamental bounds describe the performance an adversary can achieve in the worst case. They can therefore be used to derive security or privacy guarantees, which leads to security- or privacy-by-design. Moreover, the proof of a fundamental bound often reveals which information-processing structure is the most promising strategy. It therefore often provides a deep understanding of information processing and guides towards efficient design structures. The results are often timeless, and there are numerous interesting open problems that remain to be solved.
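As a concrete illustration of how such a bound serves as a benchmark, the Shannon capacity of an additive white Gaussian noise (AWGN) channel is C = B log2(1 + SNR). The bandwidth and SNR values below are hypothetical, chosen only to show the calculation.

```python
import math

def awgn_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon capacity of an AWGN channel in bits per second:
    C = B * log2(1 + SNR), with SNR given on a linear scale."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Example: a 1 MHz channel at 20 dB SNR (SNR = 100 on a linear scale).
capacity = awgn_capacity(1e6, 100.0)
print(f"{capacity / 1e6:.3f} Mbit/s")  # ~6.658 Mbit/s
```

Any practical scheme on this channel achieves at most this rate with vanishing error probability, so the gap between a deployed system's throughput and this number quantifies how much room for improvement remains.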
In this project, we want to explore fundamental bounds for traditional communication scenarios, source coding setups, distributed decision-making, physical-layer security and privacy, as well as statistical learning and data disclosure.
Background and summary of fellowship
One of the main causes of the insecurity of IT systems is complexity. For example, the Linux kernel has been designed to run on all possible platforms (including our IoT light bulbs, the largest supercomputers, and the International Space Station) and includes all sorts of features to accommodate diverse usage scenarios. Even though the kernel is a foundational part of the majority of our software infrastructure and has been developed by highly skilled engineers, this complexity results in 30 million lines of code that are virtually impossible to implement correctly. The kernel contains thousands of documented bugs and an unknown number of undiscovered issues. This leaves fertile ground for attackers, who can steal our data, use our resources to mine bitcoins, or take complete control of our systems.
We believe that systems should be developed with much more rigorous techniques. We develop methods to mathematically model hardware and software systems, and techniques to verify with mathematical precision that vulnerabilities are impossible. Even though these techniques are heavy-duty, we focus our research on the analysis of the components that constitute the root of trust of the IT infrastructure, with the goal of demonstrating that faults in untrusted applications can be securely contained and cannot affect the critical parts of the system. That is, we do not aim to guarantee that Pokémon GO is bug-free, but we can mathematically rule out that its bugs can be used to steal your BankID or your crypto wallet. In particular, we are currently focusing on developing theories to prevent recent famous vulnerabilities (e.g. Spectre) that are caused by low-level processor optimizations.
Background and summary of fellowship
As data generation increasingly takes place on wireless IoT devices, Artificial Intelligence and Machine Learning (AI/ML) over Internet of Things (IoT) wireless networks becomes critical. Many studies have shown that state-of-the-art wireless protocols are highly inefficient or unsustainable for supporting AI/ML services. There is a consensus in the leading research communities that AI/ML for the connected world is in its infancy and that much will have to be investigated in the next decade. In this research project, I will follow a research plan divided into roughly three open research sub-directions:
- Theoretical foundations of distributed AI/ML: I will contribute to making AI/ML theory aware of the characteristics of the wireless networks, and will fundamentally rethink it.
- Theoretical foundations of AI/ML to design wireless networks: given the deficiencies of model-based methods, I will contribute to radically redesigning, by means of AI/ML, the future communication protocols for critical societal applications. This also includes the optimisation of current wireless protocols using AI/ML, an effort that is still at its very beginning.
- Theoretical foundations to redesign wireless networks for supporting AI/ML services: future wireless networks will have to support pervasive AI/ML services, and current communication protocols are highly inadequate for this purpose. I will contribute to establishing fundamentally new wireless protocols and theories, such as “over-the-air function computation”, to support AI/ML services over the IoT.
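The over-the-air computation idea mentioned above can be sketched under idealized assumptions (perfect synchronization, unit channel gains, small receiver noise): because analog signals superimpose on the wireless multiple-access channel, the sum of many devices' values, and hence their average, arrives in a single channel use instead of requiring one orthogonal transmission per device. The simulation below is a toy illustration, not a protocol design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Local values held by K devices, e.g. model updates in federated learning.
K = 10
local_updates = rng.normal(size=(K, 4))

# Idealized over-the-air computation: all K devices transmit simultaneously
# and the multiple-access channel physically sums their analog waveforms.
received = local_updates.sum(axis=0) + rng.normal(scale=0.01, size=4)

# The receiver recovers the average directly from one channel use.
air_average = received / K
true_average = local_updates.mean(axis=0)
print(np.max(np.abs(air_average - true_average)))  # small, limited by noise
```

The communication cost is independent of K, which is why this primitive is attractive for aggregating updates from very large IoT deployments.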
Background and summary of fellowship
Data carries information, and efficient processing of data to extract that information is key to reaching the right decisions. Computers help us understand data, extract the important information, and finally provide a decision. To do so, computers use a tool from the field of computer science called machine learning. The use of machine learning is growing, from speech recognition to robotics to autonomous cars to medical applications, including life-science data analysis. Today, machine learning is at the core of many intelligent systems across all science and engineering fields. Naturally, machine learning has to be highly reliable. Thanks to the Digital Futures fellowship, I am fortunate to address a challenge in modern machine learning: how to make machine learning more trustworthy and unbiased. For example, a camera-based face recognition system should not discriminate against people because of skin colour or gender.
To address this challenge, a prime concern is to develop explainable machine learning (xML) systems, closely related to explainable artificial intelligence (xAI). Preferably, users should be able to understand the precise effects or outcomes of such systems before they are formally deployed. Mistakes after deployment are costly in many situations, for example when detecting infections in a clinic or hospital. We should fully understand how computers use data to extract information and then reach a decision, and, in turn, how computers can explain their actions to users. The development of xML/xAI requires a confluence of mathematics, computer science and a real-life understanding of application scenarios, including user perspectives.
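As one concrete example of a model-agnostic explanation technique (not necessarily the approach this project will pursue), permutation feature importance measures how much a model's predictions degrade when a single input feature is shuffled; features whose shuffling barely matters had little influence on the decision. The data and model below are entirely synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: the label depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# A fitted "black box" to explain; here, ordinary least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(X_):
    return X_ @ w

def permutation_importance(X, y, feature):
    """Increase in mean squared error when one feature column is shuffled."""
    base_mse = np.mean((predict(X) - y) ** 2)
    Xp = X.copy()
    Xp[:, feature] = rng.permutation(Xp[:, feature])
    return np.mean((predict(Xp) - y) ** 2) - base_mse

scores = [permutation_importance(X, y, j) for j in range(3)]
print(scores)  # feature 0 dominates, feature 2 is near zero
```

Such a ranking is only a first step towards explanations users can act on, but it illustrates how the influence of each input on a decision can be quantified without opening the model.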
Background and summary of fellowship
Wireless connectivity is a key enabler of the digital transformation of society, and we have started to take its availability for granted. Although wireless data speeds have grown tremendously, we still experience unreliable wireless coverage; for example, video streaming might work flawlessly until it suddenly stalls when you walk around a corner. The Digital Futures fellowship will enable my research group at KTH to tackle this challenge. We need to explore new ways of building wireless network infrastructure to make coverage holes an issue of the past.
Two particular solutions will be explored. The first is to spread base stations out across the city, instead of concentrating them in towers, to increase the chance that every location is covered by some of them. The second is to make use of reconfigurable “mirrors”: thin plates that can be placed on buildings to reflect signals in controllable ways and thereby remove coverage holes. These mirrors are not moved mechanically but instead change their electrical properties to achieve the same effect. The project will also explore how the “spill energy” from wireless signals can be harvested to recharge the batteries of devices, particularly internet-of-things equipment that is not operated by humans.
About the project
Objective
Susan’s ride on Campus2030 aims to demonstrate the potential of digitalization in reducing the carbon footprint and improving the cost-efficiency of the construction and transportation industries. With this objective, the project will establish a one-of-a-kind smart-road infrastructure demonstrator on the KTH Valhallavägen campus for the integrated design, construction and operation of smart infrastructure. The demonstrator will incorporate a digital twin of the KTH campus, comprising multiple models and data sets that enable virtual assessment and experience of the campus infrastructure while being validated and updated through real-time data feeds from various sensors. Our work in this direction can be seen in our testbed for Intelligent Transportation Systems at www.adeye.se. Susan’s ride will showcase the potential of edge computing, federated learning and digital twins in the digital transformation of road construction and autonomous-vehicle path planning.
Background
Autonomous vehicles, dynamic charging of electric vehicles and vehicle-to-infrastructure communication are just a few examples that require a systemic solution to function sustainably. Making the smart road sustainable requires a partnership between road owners, operators, electricity companies, vehicle manufacturers, transport and logistics companies, and technology suppliers in digitalization. Data will become a fundamental asset in this partnership. It must be collected through a combination of new sensors built into the infrastructure already at construction time and sensors on smart vehicles, including construction machinery.
Crossdisciplinary collaboration
The researchers in the team represent the School of Electrical Engineering and Computer Science, KTH, the School of Architecture and the Built Environment, KTH and the School of Industrial Engineering and Management, KTH. The project leverages and extends research carried out in the Campus 2030 project and the TECoSA research centre.
Watch the recorded presentation at the Digitalize in Stockholm 2023 event: