
Introduction


The RoboSense Challenge aims to advance the state of the art in robust robot sensing across diverse robot platforms, sensor types, and challenging sensing environments.

Five distinct tracks are designed to push the boundaries of resilience in robot sensing systems, covering critical tasks such as vision-language fusion for driving, sensor placement optimization, dense SLAM, cross-view matching, and multi-platform sensor adaptation. Each track challenges participants to address real-world conditions such as sensor failures, environmental noise, and adverse weather, ensuring that perception models remain accurate and reliable under such scenarios.

The competition provides participants with datasets and baseline models relevant to each track, facilitating the development of novel algorithms that improve robot sensing performance across both standard and out-of-distribution scenarios. A focus on robustness, accuracy, and adaptability ensures that models developed in this challenge can generalize effectively across different conditions and sensor configurations, making them applicable to a wide range of autonomous systems, including vehicles, drones, and quadrupeds.


Challenge Tracks

There are five tracks in the RoboSense Challenge, with emphasis on the following robust robot sensing topics:

     - Track #1: Robust Driving with Language.
     - Track #2: Robust Sensor Placement.
     - Track #3: Robust Dense SLAM.
     - Track #4: Robust Cross-View Matching.
     - Track #5: Robust Sensor Adaptation.

For additional implementation details, kindly refer to our RoboBEV, RoboDepth, Robo3D, and Place3D projects.




Venue


The RoboSense Challenge is affiliated with the 42nd IEEE International Conference on Robotics and Automation (ICRA 2025).

ICRA is the IEEE Robotics and Automation Society's flagship conference. ICRA 2025 will be held from May 19th to 23rd, 2025, in Atlanta, USA.

The ICRA competitions provide a unique venue for state-of-the-art technical demonstrations from research labs throughout academia and industry. For additional details, kindly refer to the ICRA 2025 website.



Contact


E-mail: robosense2025@gmail.com.




Timeline


  • Team Up

    Register your team by filling in this Google Form.

  • Release of Training and Evaluation Data

    Download the data from the competition toolkit. 

  • Competition Servers Online @ CodaLab

  • Phase One Deadline

    Shortlisted teams are invited to participate in the next phase. 

  • Phase Two Deadline

    Don't forget to include the code link in your submissions. 

  • Award Decision Announcement

    Announced in conjunction with the ICRA 2025 conference.

Awards


1st Place

Cash $5,000 + Certificate

  • This award will be given to five awardees, one per track; each winning team receives $1,000.

2nd Place

Cash $3,000 + Certificate

  • This award will be given to five awardees, one per track; each winning team receives $600.

3rd Place

Cash $2,000 + Certificate

  • This award will be given to five awardees, one per track; each winning team receives $400.

Innovative Award

Certificate

  • The awardees will be selected by the program committee; ten awards will be given in total, two per track.


Toolkit





Competition Tracks


Track #1: Robust Driving with Language

This track challenges participants to develop vision-language models that enhance the robustness of autonomous driving systems under real-world conditions, including sensor corruptions and environmental noise.

Participants are expected to design models that fuse driving perception, prediction, and planning with natural language understanding, enabling the vehicle to make accurate, human-like decisions.
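
As a rough illustration of the evaluation setting, the sketch below corrupts multi-view camera frames with additive Gaussian noise before they would be passed, together with a natural-language question, to a driving vision-language model. The corruption recipe and the `driving_vlm.answer` call are hypothetical placeholders for this example, not the benchmark's actual corruption suite or toolkit API.

```python
import numpy as np

def simulate_camera_corruption(image, severity=3, seed=0):
    """Illustrative stand-in for a sensor corruption: additive Gaussian noise
    whose strength grows with severity (1-5). The official benchmark defines
    its own corruption suite; this function is only a placeholder."""
    rng = np.random.default_rng(seed)
    std = 5.0 * severity
    noisy = image.astype(np.float32) + rng.normal(0.0, std, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# A hypothetical perception-QA step: the model answers a driving question
# from corrupted multi-view frames (`driving_vlm.answer` is a placeholder,
# not an API provided by the challenge toolkit).
frames = [np.zeros((900, 1600, 3), dtype=np.uint8) for _ in range(6)]
corrupted = [simulate_camera_corruption(f, severity=4) for f in frames]
question = "Is it safe to change to the left lane?"
# prediction = driving_vlm.answer(corrupted, question)
```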

Kindly refer to this page for more technical details on this track.

Track Organizers




Track #2: Robust Sensor Placement

This track challenges participants to design LiDAR-based perception models, including those for 3D object detection and LiDAR semantic segmentation, that can adapt to diverse sensor placements in autonomous systems.

Participants will be tasked with developing algorithms that can adapt to and optimize sensor placements, ensuring high-quality perception across a wide range of environmental conditions.
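
For intuition about the setting, here is a minimal sketch that re-expresses a LiDAR point cloud in the frame of an alternative sensor placement through a rigid transform; the 1.8 m mount offset, the 10-degree yaw, and the random cloud are illustrative values only, not configurations used in the track.

```python
import numpy as np

def transform_points(points, rotation, translation):
    """Re-express LiDAR points recorded at one sensor placement in the
    frame of another placement (rigid transform)."""
    # points: (N, 3) array of x, y, z; rotation: (3, 3); translation: (3,) metres
    return points @ rotation.T + translation

# Toy example: a sensor mounted 1.8 m higher and yawed by 10 degrees
# (values are illustrative, not the challenge's configurations).
yaw = np.deg2rad(10.0)
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
t = np.array([0.0, 0.0, 1.8])

cloud = np.random.default_rng(0).random((1000, 3)) * 50.0   # placeholder cloud
cloud_new_placement = transform_points(cloud, R, t)
print(cloud_new_placement.shape)                             # (1000, 3)
```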

Kindly refer to this page for more technical details on this track.

Track Organizers




Track #3: Robust Dense SLAM

This track challenges participants to develop dense simultaneous localization and mapping (SLAM) models that maintain high accuracy under noisy sensor inputs, such as corrupted RGB-D video data.

Participants are expected to build robust SLAM systems capable of generating high-quality 3D reconstructions and accurate camera trajectories, despite the presence of a diverse set of noises and perturbations in the RGB-D inputs.
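
Purely as an illustration of the kind of input degradation involved, the sketch below perturbs an RGB-D frame with colour noise and depth dropout before it would be fed to a SLAM pipeline; the corruption types, severities, and the `corrupt_rgbd` helper are assumptions for this example, not the track's official definitions.

```python
import numpy as np

def corrupt_rgbd(rgb, depth, noise_std=10.0, dropout=0.2, seed=0):
    """Apply illustrative perturbations to an RGB-D frame: additive Gaussian
    noise on the colour image and randomly invalidated depth pixels."""
    rng = np.random.default_rng(seed)
    noisy_rgb = np.clip(rgb.astype(np.float32)
                        + rng.normal(0.0, noise_std, rgb.shape), 0, 255)
    mask = rng.random(depth.shape) < dropout     # pixels to invalidate
    noisy_depth = depth.copy()
    noisy_depth[mask] = 0.0                      # 0 = missing measurement
    return noisy_rgb.astype(np.uint8), noisy_depth

rgb = np.full((480, 640, 3), 128, dtype=np.uint8)     # placeholder colour frame
depth = np.ones((480, 640), dtype=np.float32) * 2.5   # placeholder depth (metres)
noisy_rgb, noisy_depth = corrupt_rgbd(rgb, depth)
```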

Kindly refer to this page for more technical details on this track.

Track Organizers




Track #4: Robust Cross-View Matching

This track aims at the development of models for robust cross-view matching, specifically for scenarios where input data is captured from drastically different viewpoints, such as aerial (drone or satellite) and ground-level images.

Participants are tasked with designing models that can effectively match corresponding visual and textual elements across differing views, even in the presence of corruptions such as blurriness, occlusion, or noise.
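
As a simple illustration of the retrieval setting, the snippet below scores ground-to-aerial matching with cosine similarity over paired, L2-normalised embeddings and reports Recall@1; the embedding dimensionality and the `recall_at_1` helper are assumptions for this sketch, and the official evaluation protocol may differ.

```python
import numpy as np

def recall_at_1(ground_emb, aerial_emb):
    """Given L2-normalised embeddings of paired ground and aerial images
    (row i of each matrix is the same location), report Recall@1 for
    ground-to-aerial retrieval by cosine similarity."""
    sim = ground_emb @ aerial_emb.T            # (N, N) similarity matrix
    predicted = sim.argmax(axis=1)             # best aerial match per query
    return float(np.mean(predicted == np.arange(len(ground_emb))))

rng = np.random.default_rng(0)
emb_g = rng.normal(size=(100, 256))
emb_g /= np.linalg.norm(emb_g, axis=1, keepdims=True)
emb_a = rng.normal(size=(100, 256))
emb_a /= np.linalg.norm(emb_a, axis=1, keepdims=True)
print(recall_at_1(emb_g, emb_a))               # near 0.01 for random embeddings
```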

Kindly refer to this page for more technical details on this track.

Track Organizers




Track #5: Robust Sensor Adaptation

This track focuses on the development of robust event camera perception models that can seamlessly adapt across different robot platforms, including vehicles, drones, and quadrupeds.

Participants are expected to develop algorithms that effectively transfer event-based perception, specifically semantic segmentation, across platforms that differ in sensor configuration and movement dynamics.
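
For context, a common way to feed an event stream into a segmentation network is to bin it into a voxel grid; the sketch below shows one such conversion on synthetic events, and the bin count, sensor resolution, and `events_to_voxel_grid` helper are illustrative choices rather than the track's official preprocessing.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate an event stream into a (num_bins, H, W) voxel grid.
    `events` is an (N, 4) array of (x, y, timestamp, polarity in {-1, +1})."""
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    t = events[:, 2]
    # Normalise timestamps to [0, num_bins) and bin the events temporally.
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1e-6)
    bins = t_norm.astype(int)
    xs, ys = events[:, 0].astype(int), events[:, 1].astype(int)
    np.add.at(grid, (bins, ys, xs), events[:, 3])   # signed polarity counts
    return grid

# Toy stream of 10k synthetic events on a 260x346 sensor (DAVIS-like size).
rng = np.random.default_rng(0)
ev = np.stack([rng.integers(0, 346, 10000),       # x
               rng.integers(0, 260, 10000),       # y
               np.sort(rng.random(10000)),        # timestamps
               rng.choice([-1.0, 1.0], 10000)], axis=1)
voxels = events_to_voxel_grid(ev, num_bins=5, height=260, width=346)
print(voxels.shape)                               # (5, 260, 346)
```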

Kindly refer to this page for more technical details on this track.

Track Organizers





Evaluation Servers


Track #1

Robust Driving with Language

  • Facilitating driving perception, prediction, and planning robustness with rich language understanding

Track #2

Robust Sensor Placement

  • Focusing on the optimization of sensor placement strategies under challenging driving conditions

Track #3

Robust Dense SLAM

  • Testing the accuracy and resilience of SLAM algorithms in dynamic and unpredictable real-world environments

Track #4

Robust Cross-View Matching

  • Assessing the cross-view matching robustness from multiple perspectives for comprehensive scene perception

Track #5

Robust Sensor Adaptation

  • Tailored for enhancing the robustness of event camera perception models across different robot platforms


FAQs


Please refer to the Frequently Asked Questions for the detailed rules and conditions of this competition.




Organizing Team


Challenge Organizers


Lingdong Kong

NUS Computing

Ye Li

UMich Robotics

Xiaohao Xu

UMich Robotics

Feng Xue

UMich Robotics

Shaoyuan Xie

UC Irvine

Meng Chu

Shanghai AI Lab

Hanjiang Hu

Carnegie Mellon

Yaru Niu

Carnegie Mellon


Program Committee


Wei Tsang Ooi

NUS Computing

Benoit R. Cottereau

CNRS & IPAL

Lai Xing Ng

A*STAR, I2R

Zhedong Zheng

University of Macau


Xiaonan Huang

UMich Robotics

Wenwei Zhang

Shanghai AI Lab

Liang Pan

Shanghai AI Lab

Ziwei Liu

NTU, S-Lab




Associated Project


This project is affiliated with DesCartes, a CNRS@CREATE program on Intelligent Modeling for Decision-Making in Critical Urban Systems.



Terms & Conditions


This competition is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, scientific publications, or personal experimentation. Permission is granted to use the data given that you agree:

1. That the data in this competition comes “AS IS”, without express or implied warranty. Although every effort has been made to ensure accuracy, we do not accept any responsibility for errors or omissions.
2. That you may not use the data in this competition or any derivative work for commercial purposes as, for example, licensing or selling the data, or using the data with a purpose to procure a commercial gain.
3. That you include a reference to RoboSense (including the benchmark data and the specially generated data for academic challenges) in any work that makes use of the benchmark. For research papers, please cite our preferred publications as listed on our webpage.

To ensure a fair comparison among all participants, we require:

1. All participants must follow the exact same data configuration when training and evaluating their algorithms. Please do not use any public or private datasets other than those specified for model training.
2. The theme of this competition is to probe the out-of-distribution robustness of autonomous driving perception models. Therefore, any use of the corruption and sensor failure types designed in this benchmark is strictly prohibited, including any atomic operation that composes any of the mentioned corruptions.
3. To ensure the above two rules are followed, each participant is requested to submit the code with reproducible results before the final result is announced; the code is for examination purposes only and we will manually verify the training and evaluation of each participant's model.