ICLR 2025 Workshop on
Representational Alignment
(Re-Align)
April 2025
ICLR 2025 in Singapore
About
Both natural and artificial intelligences form representations of the world that they use to reason, make decisions, and communicate. Despite extensive research across machine learning, neuroscience, and cognitive science, it remains unclear what the most appropriate ways are to compare and align the representations of intelligent systems (Sucholutsky et al., 2023). In the second edition of the Workshop on Representational Alignment (Re-Align), we bring together researchers from diverse fields who study representational alignment to make concrete progress on this set of open interdisciplinary problems. We invite researchers across the machine learning, neuroscience, and cognitive science communities to participate in the workshop and to contribute in two ways:
First, in the form of contributed papers that address questions of representational alignment stemming from the following central theme: When and why do intelligent systems learn aligned representations, and how can scientists and engineers intervene on this alignment? Other questions topical for this year’s workshop include:
- To what extent does representational alignment indicate shared computational strategies among biological and artificial systems?
- How have current alignment metrics advanced our understanding of computation, and what measurement approaches should we explore next?
- How can we develop more robust and generalizable measures of alignment that work across different domains and types of representations?
- How can we systematically increase (or decrease) representational alignment among biological and artificial systems?
- What are the implications (positive and negative) of increasing or decreasing representational alignment between systems for behavioral alignment, value alignment, and beyond?
Second, by participating in our workshop hackathon. Since the first iteration of the Re-Align workshop, there have been numerous debates about the metrics we use to measure representational similarity, which is often taken as a measure of representational alignment (e.g., Cloos et al., 2024; Khosla et al., 2024; Lampinen et al., 2024; Schaeffer et al., 2024). As of now, there is little consensus on which metric best achieves the goal of identifying similarity between systems. The hackathon component of the workshop will help articulate the consequences of these methodological choices by fostering a common language among researchers, and, as a result, increase the reproducibility of research in this subdomain.
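As one illustration of the kind of metric under debate, the sketch below computes linear centered kernel alignment (CKA; Kornblith et al., 2019) between two response matrices. It is an illustrative example only, not an endorsement of any particular metric or a tool provided by the workshop.

```python
# Minimal sketch of linear CKA (Kornblith et al., 2019), one widely used
# representational-similarity metric. X and Y are (n_stimuli, n_features)
# response matrices from two systems presented with the same stimuli.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    # Center each feature dimension.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # Linear-kernel (HSIC-based) similarity, normalized to [0, 1].
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return float(numerator / denominator)

# Example with synthetic data: 100 shared stimuli, two "systems"
# with different feature dimensionalities.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))   # e.g., activations from a model layer
Y = rng.normal(size=(100, 32))   # e.g., recordings from a neural population
print(linear_cka(X, Y))          # higher values indicate greater alignment
```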
Call for papers
We invite the submission of papers for presentation at the workshop. We broadly welcome submissions related to representational alignment among artificial and biological information processing systems. Submissions can come from any area of cognitive science, neuroscience, machine learning, or related fields. We will accept both technical papers (theory and/or application) and position papers.
Important dates
All deadlines are anywhere on earth (AoE) time.
Paper submission deadline | Monday, February 3rd, 2025
Reviewing deadline | Friday, February 21st, 2025
Author notification | Monday, March 3rd, 2025
Camera-ready copy (CRC) deadline | Sunday, April 20th, 2025
Submission instructions
All submissions should be made via OpenReview.
Submissions should consist of a single PDF that follows the official ICLR LaTeX template but with an edited header identifying the submission as being to the workshop rather than to the main conference; for convenience, we have adapted the official template here. We welcome short (up to 5 pages + references and appendix) or long (up to 9 pages + references and appendix) technical and position papers that have not yet been accepted for publication at other venues. Submissions of late-breaking results and unpublished or ongoing work, including work currently under review at other venues, are welcome.
During submission, authors will be asked to specify whether their submission aligns more with cognitive science, neuroscience, or machine learning; we will try to assign reviewers accordingly. Since this workshop is interdisciplinary, with the goal of bringing together researchers from various communities and enabling knowledge transfer between them, we suggest (but do not require) that submissions use the terminology and general formalism described in the position paper that the organizers of this workshop recently co-authored in collaboration with others (📄 paper). (Commentaries and critiques of this framework are also welcome!)
Double-blind policy
All submissions should be fully anonymized. Please remove any identifying information such as author names, affiliations, personalized GitHub links, etc. Links to anonymized content are OK, though we don't require reviewers to examine such content.
Archival policy
Work that has already appeared in a venue (including any workshop, conference, or journal) must be significantly extended to be eligible for workshop submission. Work that is currently under review at another venue or has not yet been published in an archival format as of the date of the Re-Align deadline may be submitted. For example, submissions that are concurrently submitted to ICML, CogSci, and CCN are welcome. We also welcome COSYNE abstracts, which should be extended to our short or long paper format.
Selection criteria
All submissions will undergo peer review by the workshop’s program committee, and editorial decisions will be made by the organizing committee. Acceptance decisions will be based on theoretical or empirical validation, novelty, and suitability to the workshop’s goals.
Presentation format
All accepted abstracts will be invited for presentation in the form of a poster. A few select contributions will additionally be invited as contributed talks. Accepted papers will be posted in a non-archival format on the workshop website.
Poster format
Please follow the official ICLR guidelines regarding the size of workshop posters.
Remote presentation
Acceptance decisions will be made independently of any constraints on the ability of authors to attend ICLR in person. Our goal is to allow every accepted paper to be presented in person. If our assigned space at the conference does not have the capacity for all papers to be presented as posters, we will feature any papers we cannot accommodate in an asynchronous virtual format (details to be determined). This option will also be available to presenters who are unable to travel due to visa or funding restrictions.
Funding for contributors to ICLR 2025
This year, ICLR is discontinuing the separate “Tiny Papers” track and is instead requiring each workshop to accept short paper submissions, with an eye toward inclusion; see the ICLR page on Tiny Papers for more details. Authors of these papers will be earmarked for potential funding from ICLR, but must submit a separate Financial Assistance application, which will be used to evaluate their eligibility. This application for Financial Assistance to attend ICLR 2025 will become available on the ICLR 2025 conference homepage at the beginning of February and close on March 2nd.
Hackathon
Details coming soon!
Invited speakers
Organizers
Brian Cheung
MIT
Dota Dong
MPI for Psycholinguistics
Erin Grant
UCL
Ilia Sucholutsky
NYU
Lukas Muttenthaler
TU Berlin
Siddharth Suresh
University of Wisconsin-Madison