TECoSA seminar: Addressing Uncertainty in the Safety Assurance of Machine-Learning
Date and time: 6 April 2023, 15:00 – 16:00 CEST (UTC +2)
Speaker: Prof Simon Burton, Scientific Director at Fraunhofer IKS
Title: Addressing Uncertainty in the Safety Assurance of Machine-Learning
Please email email@example.com to register.
This event is jointly organised by TECoSA and Digital Futures.
Abstract: There is increasing interest in the application of machine learning (ML) technologies to safety-critical cyber-physical systems, with the promise of increased levels of autonomy due to their potential for solving complex perception and planning tasks. However, demonstrating the safety of ML is seen as one of the most challenging hurdles to its widespread deployment for such applications. In this presentation, I explore the factors that make the safety assurance of ML such a challenging task. In particular, I address the impact of uncertainty on the confidence in ML safety assurance arguments. I show how this uncertainty is related to complexity in the ML models as well as the inherent complexity of the tasks that they are designed to implement. Based on definitions of categories and severity of uncertainty, as well as an exemplary assurance argument structure, we examine possible defeaters to the assurance claims and, consequently, how the assurance argument can be made more convincing. The analysis combines an understanding of insufficiencies in machine learning models, their causes and mitigating measures, with a systematic analysis of the types of asserted context, asserted evidence and asserted inference within the assurance argument.
This leads to a systematic identification of requirements on the assurance argument structure as well as supporting evidence. A combination of qualitative arguments and quantitative evidence is required to build robust arguments for safety-related properties of ML functions, arguments that are continuously refined to reduce residual and emerging uncertainties after the function has been deployed.
The presentation ends with an outlook on both developments in the standardisation of the safety of AI/ML, in particular ISO PAS 8800 Road Vehicles – Safety and AI, as well as open research topics.
Bio: Prof. Dr Simon Burton graduated in computer science at the University of York, where he also completed his PhD on the verification of safety-critical software in 2001. Simon has a background in a number of industries but has spent the last two decades focused mainly on the automotive sector, working on research and development projects as well as leading consulting, engineering service and product organisations. Most recently, he held the role of Director of Vehicle Systems Safety at Robert Bosch GmbH, where, amongst other things, his efforts were focused on developing strategies for ensuring the safety of automated driving systems.
In September 2020, he joined Fraunhofer IKS as scientific director, where he steers research strategy into “safe intelligence”. His personal research interests include the safety assurance of complex, autonomous systems and the safety of machine learning. In addition to his role at Fraunhofer IKS, he is an honorary visiting professor at the University of York, where he supports a number of research activities and interdisciplinary collaborations. He is also an active member of various standardisation committees. He is the convenor of the ISO working group ISO/TC 22/SC 32/WG 14, with responsibility for developing an international standard on safety and AI for road vehicles.