ICLR 2024 Workshop on
Representational Alignment
(Re-Align)

Let’s get aligned on representational alignment between artificial and biological neural systems! What is representational alignment, how should we measure it, and how can it be beneficial for the science of intelligence?


Saturday, May 11th, 2024

co-located with ICLR 2024 in Vienna, Austria

News

All decisions have been sent out; the list of accepted papers and the updated workshop schedule can be found below. We are super excited to see you all in Vienna!


About

The question of what makes a good representation in machine learning can be addressed in several ways: by evaluating downstream behavior, by inspecting internal representations, or by characterizing a system’s inductive biases. Each of these methodologies involves measuring the alignment of an artificial intelligence (AI) system to a ground-truth system (usually a human or a population of humans) at some level of analysis (be it behavior, internal representation, or something in between). However, despite this shared goal, the machine learning, neuroscience, and cognitive science communities that study alignment among artificial and biological intelligence systems currently lack a shared framework for conveying insights across methodologies and disciplines.

This workshop aims to bridge this gap by defining, evaluating, and understanding the implications of representational alignment among biological & artificial systems. We invite researchers across the machine learning, neuroscience, and cognitive science communities to contribute to this discussion in the form of invited talks, contributed papers, and structured discussions that address questions such as:

  • How can we measure representational alignment among biological and artificial intelligence (AI) systems? (One illustrative metric is sketched after this list.)
  • Can representational alignment tell us if AI systems use the same strategies to solve tasks as humans do?
  • What are the consequences (positive, neutral, and negative) of representational alignment?
  • How does representational alignment connect to behavioral alignment and value alignment, as understood in AI safety and in interpretability and explainability research?
  • How can we increase (or decrease) representational alignment of an AI system?
  • How does the degree of representational alignment between two systems impact their ability to compete, cooperate, and communicate?
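
As a purely illustrative example for the first question above, many commonly used alignment measures compare two systems’ responses to the same set of stimuli. The sketch below computes linear centered kernel alignment (CKA) with NumPy; it is a minimal illustration, not a method endorsed or prescribed by the workshop, and the names (linear_cka, acts_model, acts_brain) are hypothetical.

    import numpy as np

    def linear_cka(X, Y):
        """Linear centered kernel alignment (CKA) between two sets of
        representations of the same n stimuli.
        X: (n, d1) activations from one system (e.g., a network layer).
        Y: (n, d2) activations from another (e.g., neural recordings).
        Returns a similarity score between 0 and 1."""
        # Center each feature dimension across stimuli.
        X = X - X.mean(axis=0, keepdims=True)
        Y = Y - Y.mean(axis=0, keepdims=True)
        # CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
        cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
        return cross / (np.linalg.norm(X.T @ X, ord="fro") *
                        np.linalg.norm(Y.T @ Y, ord="fro"))

    # Toy usage: 100 shared stimuli, two hypothetical response matrices.
    rng = np.random.default_rng(0)
    acts_model = rng.standard_normal((100, 512))  # e.g., one model layer
    acts_brain = rng.standard_normal((100, 64))   # e.g., recorded responses
    print(linear_cka(acts_model, acts_brain))

Other measures discussed in the representational-alignment literature (e.g., representational similarity analysis or linear predictivity) follow the same basic pattern of comparing the two systems’ responses to a shared stimulus set.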

In collaboration with other researchers, the organizers have prepared a position paper (📄 paper) that collects prior work and highlights key issues on the topic of representational alignment. A concrete goal of the workshop is to expand this paper with new insights generated during the event.


Contributed papers

Below you will find a list of all papers accepted as contributed talks and posters. The papers and their corresponding reviews can be found on the workshop's OpenReview page.

Contributed Talks

Poster Presentations

Poster session 1:

Poster session 2:

Poster Format

Please follow the official ICLR guidelines regarding the size of workshop posters. Note that these are different from posters presented at the main conference.

Remote Presentation

Our goal is to allow every accepted paper to be presented in person. If our assigned space at the conference does not have the capacity for all papers to be presented as posters, we will feature any papers we cannot accommodate in an asynchronous virtual format (3-minute videos hosted on an online platform). This option will also be available to presenters who are unable to travel to Vienna due to visa issues or funding restrictions. Please contact us as soon as possible if you need to make use of the remote presentation format.


Speakers and Panelists

SueYeon Chung

New York University

Andrew Lampinen

Google DeepMind

Bradley Love

University College London

Mariya Toneva

Max Planck Institute for Software Systems

Schedule
Please see the ICLR.cc virtual site for the most up-to-date schedule. An ICLR 2024 registration is required to access the virtual site.

Organizers

Ilia Sucholutsky

Princeton University

Erin Grant

University College London

Katherine Hermann

Google DeepMind

Jascha Achterberg

University of Cambridge


Sponsor

Contact
Reach out to representational-alignment@googlegroups.com for any questions.
Cover image provided by Fabian Lackner via CC BY-SA 3.0.