ICLR 2024 Workshop on
Representational Alignment
(Re-Align)

Let’s get aligned on representational alignment between artificial and biological neural systems! What is representational alignment, how should we measure it, and how can it be beneficial for the science of intelligence?


Saturday, May 11th, 2024

co-located with ICLR 2024 in Vienna, Austria

News

All decisions have been sent out; the list of accepted papers and the updated workshop schedule can be found below. We are super excited to see you all in Vienna!


About

The question of “What makes a good representation?” in machine learning can be addressed in one of several ways: by evaluating downstream behavior, by inspecting internal representations, or by characterizing a system’s inductive biases. Each of these methodologies involves measuring the alignment of an artificial intelligence system to a ground-truth system (usually a human or a population of humans) at some level of analysis (be it behavior, internal representation, or something in between). However, despite this shared goal, the machine learning, neuroscience, and cognitive science communities that study alignment among artificial and biological intelligence systems currently lack a shared framework for conveying insights across methodologies and disciplines.

This workshop aims to bridge this gap by defining, evaluating, and understanding the implications of representational alignment among biological and artificial systems. We invite researchers across the machine learning, neuroscience, and cognitive science communities to contribute to this discussion in the form of invited talks, contributed papers, and structured discussions that address questions such as:

  • How can we measure representational alignment among biological and artificial intelligence (AI) systems?
  • Can representational alignment tell us if AI systems use the same strategies to solve tasks as humans do?
  • What are the consequences (positive, neutral, and negative) of representational alignment?
  • How does representational alignment connect to behavioral alignment and value alignment, as understood in AI safety and interpretability & explainability?
  • How can we increase (or decrease) representational alignment of an AI system?
  • How does the degree of representational alignment between two systems impact their ability to compete, cooperate, and communicate?

In collaboration with other researchers, the organizers have prepared a position paper (📄 paper) that collects prior work and highlights key issues on the topic of representational alignment. A concrete goal of the workshop is to expand this paper with the new insights generated during the event.


Accepted papers

Below you will find a list of all papers accepted as contributed talks and posters. The papers and their corresponding reviews can be found on the workshop's OpenReview page.

Contributed Talks

Poster Presentations

Poster session 1:

Poster session 2:

Poster Format

Please follow the official ICLR guidelines regarding the size of workshop posters. Note that the required dimensions differ from those for posters presented at the main conference.

Remote Presentation

Our goal is to allow every accepted paper to be presented in person. If our assigned space at the conference does not have the capacity for all papers to be presented as posters, we will feature the papers we cannot accommodate in an asynchronous virtual format (3-minute videos hosted on an online platform). This option will also be available to presenters who are unable to travel to Vienna due to visa issues or funding restrictions. Please contact us as soon as possible if you need to make use of the remote presentation format.


Speakers and Panelists

SueYeon Chung

New York University

Andrew Lampinen

Google DeepMind

Bradley Love

University College London

Talia Konkle

Harvard University

Mariya Toneva

Max Planck Institute for Software Systems

Schedule
start time  duration  event / theme
8:50        0:10      opening remarks
9:00        0:20      invited talk: Talia
                      theme: measuring representational alignment
                      (What information is captured by measures of representational alignment?)
9:20        0:20      invited talk: Simon
9:40        0:20      discussion + coffee
10:00       0:30      panel
10:30       0:15      contributed talk 1
10:45       0:15      contributed talk 2
11:00       1:00      poster session 1
12:00       2:00      community lunch (sponsored)
14:00       0:20      invited talk: Andrew
                      theme: bridging representational spaces
                      (How can we align the representations of heterogeneous systems?)
14:20       0:20      invited talk: Mariya
14:40       0:35      discussion + coffee
15:15       0:30      panel
15:45       0:15      contributed talk 3
16:00       1:00      poster session 2
17:00       0:20      invited talk: SueYeon
                      theme: increasing representational alignment
                      (Can we optimize directly for representational alignment?)
17:20       0:20      invited talk: Brad
17:40       0:30      discussion + refreshments
18:10       0:30      panel
18:40       0:10      closing remarks

Organizers

Ilia Sucholutsky

Princeton University

Erin Grant

University College London

Katherine Hermann

Google DeepMind

Jascha Achterberg

University of Cambridge


Sponsors

Contact
Reach out to representational-alignment@googlegroups.com for any questions.
Cover image provided by Fabian Lackner via CC BY-SA 3.0.