Embodied Cognition

Date and time: Tuesday 16 June 2020, 3pm – 4pm
Speaker: Hedvig Kjellström
Presentation material: Hedvig Kjellström_Digital Futures presentation (pdf 3.8 MB)

Abstract: The way humans learn is deeply shaped by the fact that we have an embodiment – a physical location in the world, and the ability to change the world, both through physical interaction and through spoken and written communication with other agents. Ideas about the interplay between body and mind go back all the way to the Enlightenment and Immanuel Kant, via the Phenomenology movement of the 1900s, to modern AI, where strategies inspired by Embodied Cognition are used to improve the functionality and learning strategies of artificial embodied systems, both with physical bodies (e.g. autonomous cars, humanoid robots, exoskeletons, search and rescue robots) and virtual ones (e.g. social agents like Siri or Alexa). I like to think about the effect of embodiment on our learning in three related ways:

1. We are able to alter the state of the scene we are observing so as to learn aspects of it that are not apparent from a first look. For example, we can move our head to look from a different angle, or squeeze, push or shake an object to investigate it.

2. Humans have a very limited communication bandwidth compared to the internal computational capacity of the brain. This means that we cannot easily perform reasoning together with other humans in the way a computer cluster can share computations. It also means that communication between humans is heavily under-determined and error-prone.

3. This limited bandwidth also means that we are forced to learn from very few examples, and are extremely good at transfer learning and abstraction of knowledge. For example, it has been shown that a child can learn to recognize an unseen animal, e.g. an elephant, from a single simple drawing. This indicates that humans use very different visual learning strategies than state-of-the-art Computer Vision systems.

This has implications for how to design artificial embodied systems, especially systems that should collaborate with, learn from, and solve problems together with humans.

In this context, I will outline a few of the projects in my group.

Bio: Hedvig Kjellström is a Professor in the Division of Robotics, Perception and Learning at KTH in Stockholm, Sweden. She received an MSc in Engineering Physics and a PhD in Computer Science from KTH in 1997 and 2001, respectively. The topic of her doctoral thesis was 3D reconstruction of human motion in video. Between 2002 and 2006 she worked as a scientist at the Swedish Defence Research Agency, where she focused on Information Fusion and Sensor Fusion. In 2007 she returned to KTH, pursuing research in activity analysis in video. Her present research focuses on methods for enabling artificial agents to interpret human behavior and reasoning, and to behave and reason in ways interpretable to humans. These ideas are applied in the performing arts, healthcare, veterinary science, and smart society.

In 2010, she was awarded the Koenderink Prize for fundamental contributions in Computer Vision for her ECCV 2000 article on human motion reconstruction, written together with Michael Black and David Fleet. She has written around 100 papers in the fields of Computer Vision, Machine Learning, Robotics, Information Fusion, Cognitive Science, Speech, and Human-Computer Interaction. She is mostly active within Computer Vision, where she is an Associate Editor for IEEE TPAMI and regularly serves as Area Chair for the major conferences.