ICLR 2024 Workshop on
Representational Alignment
(Re-Align)

Let’s get aligned on representational alignment between artificial and biological neural systems! What is representational alignment, how should we measure it, and how can it be beneficial for the science of intelligence?


Saturday, May 11th, 2024

co-located with ICLR 2024 in Vienna, Austria

News

🚨 Submission deadline extended to February 8th, 2024!


About

The question of “What makes a good representation?” in machine learning can be addressed in one of several ways: by evaluating downstream behavior, by inspecting internal representations, or by characterizing a system’s inductive biases. Each of these methodologies involves measuring the alignment of an artificial intelligence system to a ground-truth system (usually a human or a population of humans) at some level of analysis (be it behavior, internal representation, or something in between). However, despite this shared goal, the machine learning, neuroscience, and cognitive science communities that study alignment among artificial and biological intelligence systems currently lack a shared framework for conveying insights across methodologies and disciplines.

This workshop aims to bridge this gap by defining representational alignment among biological and artificial systems, evaluating it, and understanding its implications. We invite researchers across the machine learning, neuroscience, and cognitive science communities to contribute to this discussion in the form of invited talks, contributed papers, and structured discussions that address questions such as:

  • How can we measure representational alignment among biological and artificial intelligence (AI) systems? (A minimal example of one such measure appears after this list.)
  • Can representational alignment tell us if AI systems use the same strategies to solve tasks as humans do?
  • What are the consequences (positive, neutral, and negative) of representational alignment?
  • How does representational alignment connect to behavioral alignment and value alignment, as understood in AI safety and interpretability & explainability?
  • How can we increase (or decrease) representational alignment of an AI system?
  • How does the degree of representational alignment between two systems impact their ability to compete, cooperate, and communicate?
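
To make the first question above concrete: one widely used measure in this literature is linear centered kernel alignment (CKA; Kornblith et al., 2019), which scores how similarly two systems represent the same set of stimuli. The sketch below is a minimal illustration only, not part of the workshop materials or the position paper; the NumPy implementation and the name linear_cka are our own illustrative choices.

    import numpy as np

    def linear_cka(X, Y):
        """Linear centered kernel alignment between two representations
        X (n_stimuli x d1) and Y (n_stimuli x d2) of the same stimuli.
        Returns a similarity score between 0 and 1."""
        # Center each feature dimension across stimuli.
        X = X - X.mean(axis=0, keepdims=True)
        Y = Y - Y.mean(axis=0, keepdims=True)
        # ||Y^T X||_F^2, normalized by ||X^T X||_F * ||Y^T Y||_F.
        numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
        denominator = (np.linalg.norm(X.T @ X, ord="fro")
                       * np.linalg.norm(Y.T @ Y, ord="fro"))
        return numerator / denominator

Here, X might hold a model layer’s activations on n stimuli and Y neural recordings (e.g., fMRI responses) to the same stimuli; a score near 1 indicates closely matched representational geometry. Related measures such as representational similarity analysis (RSA; Kriegeskorte et al., 2008) follow the same recipe of comparing stimulus-by-stimulus structure.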

In collaboration with other researchers, the organizers have prepared a position paper (📄 paper) that collects prior work and highlights key issues on the topic of representational alignment. A concrete goal of the workshop is to expand this paper with new insights generated during the event.


Call for Papers

We invite the submission of papers for presentation at the workshop. We broadly welcome submissions related to representational alignment among artificial and biological information processing systems, as described above. Submissions can come from any area of cognitive science, neuroscience, machine learning, or related fields. We will accept both technical papers (theory and/or application) and position papers.

Important Deadlines

All deadlines are Anywhere on Earth (AoE) time.

Submission deadline:          Thursday, February 8th, 2024
Decision notification:        Friday, March 1st, 2024
Camera-ready copy deadline:   Friday, May 3rd, 2024

Submission Instructions

All submissions should be made via OpenReview at https://openreview.net/group?id=ICLR.cc/2024/Workshop/Re-Align.

Submissions should consist of a single PDF that follows the official ICLR 2024 LaTeX template, but with an edited header identifying the submission as a workshop submission rather than a main-conference submission; for convenience, we have adapted the official template here. We welcome short (up to 5 pages, plus references and appendix) or long (up to 9 pages, plus references and appendix) technical or position papers that have not yet been accepted for publication at another venue. Submissions of late-breaking results and unpublished or ongoing work, including work currently under review at other venues, are welcome.

During submission, authors will be asked to specify whether their submission aligns more with cognitive science, neuroscience, or machine learning; we will try to assign reviewers accordingly. Since this workshop is interdisciplinary, with the goal of bringing together researchers from various communities and enabling knowledge transfer between them, we suggest (but do not require) that submissions use the terminology and general formalism described in the position paper that the workshop organizers recently co-authored with others (📄 paper). (Commentaries and critiques of this framework are also welcome!)

Double-blind Policy

All submissions should be fully anonymized. Please remove any identifying information, such as author names, affiliations, and personalized GitHub links. Links to anonymized content are acceptable, though reviewers are not required to consult such content.

Selection Criteria

Work that has already appeared in a venue (including any workshop, conference, or journal) must be significantly extended to be eligible for submission to the workshop. Work that is currently under review at another venue, or that has not yet been published in an archival format as of the Re-Align deadline, may be submitted. For example, submissions that are concurrently under review at ICML 2024 or CogSci 2024 (both of which have deadlines the week before ours) are welcome. We also welcome COSYNE 2024 abstracts, which should be extended to our short or long paper format.

All submissions will undergo peer review by the workshop’s program committee, and editorial decisions will be made by the organizing committee. Submissions will be selected based on theoretical or empirical validation, novelty, and suitability to the workshop’s goals.

Presentation Format

All accepted papers will be invited for presentation as posters, and a select few will additionally be invited as contributed talks. Accepted papers will be posted on the workshop website in a non-archival format.

Remote Presentation

Acceptance decisions will be made independently of any constraints on authors’ ability to attend ICLR in person. Our goal is for every accepted paper to be presented in person. If our assigned space at the conference cannot accommodate all papers as posters, any papers we cannot accommodate will be presented in an asynchronous virtual format. This option will also be available to presenters who are unable to travel to Vienna due to visa issues or funding restrictions. As a result, reviewing focuses purely on the quality of each submission and does not take into account any potential constraints on in-person presentation.


Speakers and Panelists

SueYeon Chung

New York University

Andrew Lampinen

Google DeepMind

Bradley Love

University College London

Talia Konkle

Harvard University

Mariya Toneva

Max Planck Institute for Software Systems

Schedule
8:50–9:00     opening remarks

Theme 1: measuring representational alignment (What information is captured by measures of representational alignment?)
9:00–9:20     invited talk: Talia Konkle
9:20–9:40     invited talk: Simon
9:40–10:20    discussion + coffee
10:20–10:50   panel
10:50–12:00   poster session

12:00–13:30   community lunch (sponsored)

Theme 2: bridging representational spaces (How can we align the representations of heterogeneous systems?)
13:30–13:50   invited talk: Andrew Lampinen
13:50–14:10   invited talk: Mariya Toneva
14:10–14:50   discussion + coffee
14:50–15:20   panel
15:20–16:30   poster session

Theme 3: increasing representational alignment (Can we optimize directly for representational alignment?)
16:30–16:50   invited talk: SueYeon Chung
16:50–17:10   invited talk: Bradley Love
17:10–17:50   discussion + refreshments
17:50–18:20   panel

18:20–18:30   closing remarks

Organizers

Ilia Sucholutsky

Princeton University

Erin Grant

University College London

Katherine Hermann

Google DeepMind

Jascha Achterberg

University of Cambridge


Sponsors

Contact
Reach out to representational-alignment@googlegroups.com for any questions.
Cover image provided by Fabian Lackner via CC BY-SA 3.0.