ICLR 2026 Workshop on
Representational Alignment
(Re-Align)

April 27th or 28th, 2026

ICLR 2026

News

The 2026 edition of Re-Align was accepted as a workshop at ICLR 2026!

About

The first two editions of the Workshop on Representational Alignment (Re-Align) at ICLR established a community bringing together researchers from machine learning, neuroscience, and cognitive science to tackle a foundational question: How can we meaningfully compare and align the internal representations of intelligent systems (Sucholutsky et al., 2023)?

Building on this foundation, the third edition of the Re-Align Workshop at ICLR pivots from asking how we measure alignment to asking what we can conclude from observing it. In particular, this edition focuses on the affordances that alignment makes possible; in other words, what can we do with alignment?

The workshop this year has two interdisciplinary focus areas:

1. Neural control. When does representational alignment allow us to meaningfully intervene on a system’s behavior? In AI, this connects to the goals of mechanistic interpretability and the engineering challenge of building safer, steerable models. In neuroscience, it parallels the long-standing goal of understanding how local neural activity gives rise to global function. By exploring how to identify, control, and even compose representations of specific functions or concepts, we create a shared framework for moving from simply mapping circuits to actively understanding their causal role in both artificial and biological systems.

2. Downstream behavior. While much work in representation learning focuses on acquiring useful base representations, representational alignment enables a new capability: targeted control over how those representations are deployed for specific tasks. This moves us beyond asking “does the model know X?” to “can we steer when and how the model applies X?”. We need tasks that assess whether a system’s features can be dynamically reconfigured to meet novel demands in complex domains like collaboration and communication. We invite contributions exploring how alignment transforms static representations into controllable computational primitives.

In addition, this year we will introduce a new component, the Re-Align Challenge.

Call for papers

We invite the submission of papers for presentation at the workshop. We broadly welcome submissions related to representational alignment among artificial and biological information processing systems. Submissions can come from any area of cognitive science, neuroscience, machine learning, or related fields.

This year we have three submission tracks:

  1. Tiny / short paper track (up to 5 pages) for preliminary findings, position papers, and new ideas
  2. Long paper track (up to 10 pages) for full-length technical research papers
  3. Challenge report track (up to 5 pages) for reports describing findings from the Re-Align Challenge

Important dates

All deadlines are anywhere on earth (AoE) time.

Paper submission deadline: Thursday, February 5th, 2026
Reviewing deadline: Thursday, February 26th, 2026
Author notification: Sunday, March 1st, 2026
Camera-ready copy (CRC) deadline: Monday, April 20th, 2026

Submission instructions

Submission portal

All submissions should be made via OpenReview.

Submission format

Submissions should consist of a single PDF that follows the official ICLR LaTeX template, but with the header edited to identify the submission as a workshop submission rather than a main-conference submission; for convenience, we have adapted the official template accordingly here.

Prohibition on prior publication

We welcome only papers that have not yet been accepted for publication at other venues. Submissions of late-breaking results and unpublished or ongoing work, including work currently under review but not yet accepted at other venues, are welcome.

Interdisciplinarity

During submission, authors will be asked to specify whether their submission aligns more with cognitive science, neuroscience, or machine learning; we will try to assign reviewers accordingly. Since this workshop is interdisciplinary, with the goal of bringing together researchers from various communities and enabling knowledge transfer between them, we suggest (but do not require) that submissions use the terminology and general formalism described in the position paper that the organizers of this workshop recently co-authored in collaboration with others (📄 paper). (Commentaries and critiques of this framework are also welcome!)

Double-blind policy

All submissions should be fully anonymized. Please remove any identifying information such as author names, affiliations, personalized GitHub links, etc. Links to anonymized content are OK, though we don't require reviewers to examine such content.

Archival policy

Work that has already appeared in a venue (including any workshop, conference, or journal) must be significantly extended to be eligible for workshop submission. Work that is currently under review at another venue or has not yet been published in an archival format as of the date of the Re-Align deadline may be submitted. For example, submissions that are concurrently submitted to ICML, CogSci, and CCN are welcome. We also welcome COSYNE abstracts, which should be extended to our short or long paper format.

Selection criteria

All submissions will undergo peer review by the workshop’s program committee, and editorial decisions will be made by the organizing committee. Acceptance will be based on theoretical or empirical validation, novelty, and suitability to the workshop’s goals.

Presentation format

All accepted abstracts will be invited for presentation in the form of a poster. A few select contributions will additionally be invited as contributed talks. Accepted papers will be posted in a non-archival format on the workshop website.

Please follow the official ICLR guidelines regarding the size of workshop posters.

Remote presentation

Acceptance decisions will be made independently of any constraints on authors’ ability to attend ICLR in person. Our goal is for every accepted paper to be presented in person. If our assigned space at the conference cannot accommodate all papers as posters, we will feature the remaining papers in an asynchronous virtual format (details to be determined). This option will also be available to presenters who are unable to travel due to visa or funding restrictions.

More details coming in January 2026!

Challenge

Research in representational alignment converges on central questions but diverges in its answers. Building on last year’s successful hackathon, we are preparing a persistent shared-task challenge that promotes transparency, reproducibility, and collaboration in representational alignment research. The challenge will provide access to:

  • Standardized stimulus sets and model repositories
  • Efficient implementations of representational comparison measures
  • A leaderboard for friendly competition on representational alignment benchmarks

The challenge report track welcomes both leaderboard submissions and creative/critical contributions related to the challenge, including novel stimulus sets, evaluation metrics, or modeling approaches.

Detailed challenge specifications and leaderboard access coming December 2025 or January 2026!

Invited speakers

David Bau

Northeastern University

Arturo Deza

Artificio

Alona Fyshe

University of Alberta

Danielle Perszyk

Amazon AGI SF Lab

Organizers

Dota Dong

MPI for Psycholinguistics

Stephanie Fu

UC Berkeley

Siddharth Suresh

University of Wisconsin-Madison and Amazon AGI SF Lab

Program committee

We thank the following reviewers for providing thorough and constructive feedback on submissions to the workshop:

To be announced