Oscillations, Scan-Compatible Designs, and Surrogate Gradients: Building Blocks for Flexible and Efficient Learning

Date and time: Thursday 4 December 2025, 13:00-14:00 CET
Speaker: Prof. Dr. Sebastian Otte, University of Lübeck
Title: Oscillations, Scan-Compatible Designs, and Surrogate Gradients: Building Blocks for Flexible and Efficient Learning

Where: Digital Futures hub, Osquars Backe 5, floor 2 at KTH main campus OR Zoom
Directions: https://www.digitalfutures.kth.se/contact/how-to-get-here/
OR
Zoom: https://kth-se.zoom.us/j/69560887455

Host: Arvind Kumar (arvkumar@kth.se)

Photo: Prof. Dr. Sebastian Otte.

Bio: Sebastian Otte is a professor at the Institute for Robotics and Cognitive Systems at the University of Lübeck, Germany, and head of the Adaptive AI research group. He pursued his doctoral research from 2013 to 2017 in the Cognitive Systems group led by Prof. Dr. Andreas Zell at Eberhard Karls University in Tübingen, receiving his Doctor of Natural Sciences (Dr. rer. nat.) in computer science in 2017. Subsequently, he joined the Neuro-Cognitive Modeling group of Prof. Dr. Martin Butz, also in Tübingen.

In 2020, he served as a substitute professor for the professorship in Distributed Intelligence at the same university. From 2022 to 2023 he was a guest researcher in the Machine Learning group of Prof. Dr. Sander Bohté at the Centrum Wiskunde & Informatica (CWI) in Amsterdam, Netherlands, supported by the Alexander von Humboldt Foundation through a Feodor Lynen Research Fellowship. In 2023, Sebastian assumed his current role as a professor of robotics at the University of Lübeck.

Abstract: As artificial intelligence systems increasingly operate in dynamic, resource-constrained environments, they must adopt fundamentally different learning principles. How can we design AI models that are powerful yet capable of adapting efficiently, in ways more closely aligned with natural intelligence?

This talk outlines a research trajectory focused on developing learning systems that combine flexibility across time scales and structures with efficiency in computation, energy, and representation. Inspired by neuroscience and dynamical systems, our research centers on recurrent architectures as a core computational principle, reflecting the deeply recurrent nature of the brain across spatial and temporal levels.

We explore architectural principles that promote modularity, sparsity, temporal abstraction, and local plasticity. I will highlight selected works that exemplify this perspective, including innovations in minimal and scan-compatible recurrent networks, spiking models with oscillatory dynamics, and recent refinements of foundational components like activation functions. These contributions serve as building blocks in a broader vision for flexible, scalable, and more sustainable learning, grounded in the temporal structure of the world.
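To give a flavour of how these building blocks fit together, the sketch below (purely illustrative, not taken from the talk) combines a leaky integrate-and-fire neuron, a fast-sigmoid surrogate gradient for the non-differentiable spike, and a scan-compatible unrolling of the recurrence using jax.lax.scan. All constants (decay 0.9, threshold 1.0, surrogate slope 10) are arbitrary assumptions chosen for illustration.

    import jax
    import jax.numpy as jnp

    @jax.custom_vjp
    def spike(v):
        # Forward pass: non-differentiable Heaviside spike at threshold 0
        return (v > 0.0).astype(v.dtype)

    def spike_fwd(v):
        return (v > 0.0).astype(v.dtype), v

    def spike_bwd(v, g):
        # Backward pass: fast-sigmoid surrogate gradient (slope 10 is an arbitrary choice)
        return (g / (1.0 + 10.0 * jnp.abs(v)) ** 2,)

    spike.defvjp(spike_fwd, spike_bwd)

    def lif_step(v, x_t):
        # One leaky integrate-and-fire step; decay 0.9 and threshold 1.0 are illustrative
        v = 0.9 * v + x_t
        s = spike(v - 1.0)
        v = v - s  # soft reset: subtract the threshold when a spike is emitted
        return v, s

    def run_lif(inputs):
        # inputs: (T, N) time-major drive; lax.scan expresses the recurrence as a single scan
        v0 = jnp.zeros(inputs.shape[-1])
        _, spikes = jax.lax.scan(lif_step, v0, inputs)
        return spikes

    # Gradients flow through the whole spike train via the surrogate
    x = jax.random.normal(jax.random.PRNGKey(0), (100, 8))
    grads = jax.grad(lambda x: run_lif(x).sum())(x)

Because the recurrence is written as a scan over time, the step function is compiled once and reused across sequence lengths, which is the kind of efficiency that scan-compatible designs aim for, while the surrogate gradient makes the otherwise non-differentiable spiking dynamics trainable end to end.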
