Neuro-Symbolic Artificial Intelligence: Intelligent Decisions Under Uncertainty

Date and time: 2 May 2023, 15:30 – 16:30 CEST (UTC +2)
Speaker: Nils Jansen, Associate Professor, Radboud University Nijmegen
Title: Neuro-Symbolic Artificial Intelligence: Intelligent Decisions Under Uncertainty

Where: Online event

Moderator: Jana Tumova, Associate Professor, Division of Robotics, Perception and Learning at KTH
Administrator: Beatrice Vincenzi

Watch the recorded presentation:


Abstract: This talk highlights our vision of broad foundational and application-driven research in artificial intelligence (AI), in particular neuro-symbolic AI. We take a broad stance on AI that combines formal methods, machine learning, and control theory. As part of this research line, we study problems inspired by autonomous systems, planning in robotics, and industrial applications.

We consider reinforcement learning (RL) as a specific machine learning technique for decision-making under uncertainty. RL generally learns to behave optimally via trial and error. Consequently, and despite its massive success in recent years, RL lacks mechanisms to ensure safe and correct behaviour. Formal methods, in particular formal verification, constitute a research area that provides formal guarantees of a system's correctness and safety based on rigorous methods and precise specifications. Yet, fundamental challenges have obstructed the effective application of verification to reinforcement learning. Our main objective is to devise novel, data-driven verification methods that tightly integrate with RL. In particular, we develop techniques that address real-world challenges to the safety of AI systems in general: scalability, expressiveness, and robustness against the uncertainty that occurs when operating in the real world. The overall goal is to advance the real-world deployment of reinforcement learning.
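One well-known way to integrate formal guarantees with RL, in the spirit described above, is "shielding": a monitor derived from a formal safety specification masks unsafe actions before the learner can execute them. The sketch below is purely illustrative and not from the talk; the corridor environment, the unsafe state, and all function names are assumptions made up for this example.

```python
import random

# Hypothetical 1-D corridor: states 0..5, where state 5 is unsafe (a "cliff").
# Actions move left (-1) or right (+1). A shield derived from the safety
# specification "never enter state 5" filters the action set at every step.
UNSAFE = {5}
ACTIONS = [-1, +1]

def safe_actions(state):
    """Return only actions whose successor state satisfies the safety spec."""
    return [a for a in ACTIONS if (state + a) not in UNSAFE and 0 <= state + a]

def step(state, action):
    """Environment transition: move, clamp at 0, reward at the goal state 4."""
    next_state = max(0, state + action)
    reward = 1.0 if next_state == 4 else 0.0  # goal sits right next to the cliff
    return next_state, reward

def shielded_rollout(episodes=200, seed=0):
    """Run random exploration through the shield; report whether the
    unsafe state was ever visited."""
    rng = random.Random(seed)
    visited_unsafe = False
    for _ in range(episodes):
        state = 0
        for _ in range(20):
            action = rng.choice(safe_actions(state))  # shield restricts the choice
            state, _ = step(state, action)
            visited_unsafe = visited_unsafe or (state in UNSAFE)
    return visited_unsafe

print(shielded_rollout())  # prints False: the shield keeps exploration safe
```

Because the shield removes unsafe actions before they are sampled, even a purely random explorer never enters state 5; any learning algorithm plugged in on top inherits the same guarantee, which is the kind of correctness-by-construction that formal verification contributes to RL.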

Bio: Nils Jansen is an Associate Professor with the Institute for Computing and Information Science (ICIS) at Radboud University, Nijmegen, The Netherlands. He received his PhD with distinction from RWTH Aachen University, Germany, in 2015. Before Radboud University, he was a research associate at the University of Texas at Austin. His research is on intelligent decision-making under uncertainty, focusing on formal reasoning about the safety and dependability of artificial intelligence (AI). He holds several grants in academic and industrial settings, including an ERC starting grant with the title: Data-Driven Verification and Learning Under Uncertainty (DEUCE).
