Causal Seminar: Sara Magliacane, University of Amsterdam

Hawes Hall, Classroom 201, Harvard Business School

The reading group will meet at 2PM before the seminar.

Causal representation learning in temporal settings with actions

Causal inference reasons about the effect of unseen interventions or external manipulations on a system. Like classic approaches to machine learning, it typically assumes that the causal variables of interest are given from the outset. However, real-world data often consists of high-dimensional, low-level observations (e.g., pixels in a video) and is thus usually not structured into such meaningful causal units. Causal representation learning aims to address this gap by learning high-level causal variables, along with their causal relations, directly from raw, unstructured data such as images, videos, or text.

In this talk, I will focus on learning causal representations from temporal sequences, e.g., sequences of images that capture the state of an environment. In particular, I will describe some of our work in which we leverage perturbations of an underlying system, such as the effects of actions performed by an agent in an environment, to provably identify causal variables and their relations from high-dimensional observations, up to component-wise transformations and permutations, in an unsupervised way. This allows us to apply our methods to realistic simulated environments for embodied AI, in which an agent performs actions in an environment and receives only unstructured high-dimensional observations. In this setting, our methods learn a latent representation that allows us to individually identify each causal variable, e.g., the different attributes or states of each object in the environment, and to learn their interactions with each other and with the agent in the form of causal relations. By reverse engineering the underlying causal system directly from visual inputs and actions, we provide a potential first step towards AI systems that reason about the world causally without supervision.

Discussants

Sara Magliacane

Assistant Professor

University of Amsterdam
Informatics Institute