The RoboSense Challenge 2025

Track #5: Cross-Platform 3D Object Detection

👋 Welcome to Track #5: Cross-Platform 3D Object Detection of the 2025 RoboSense Challenge!



🎯 Objective

As robotics continues to advance, LiDAR-based 3D object detection has become a focal point in both academia and industry. However, most existing datasets and methods target vehicle platforms, overlooking quadrupeds and drones. This challenge, built on our benchmark, aims to:

  1. Build on three platforms (vehicles, drones, and quadruped robots) to foster innovation in a unified perception framework;
  2. Bridge geometric and data distribution disparities to achieve rapid model transfer and adaptation across platforms;
  3. Lower annotation and deployment overhead, supporting collaborative sensing for heterogeneous robot teams in urban, disaster, and indoor scenarios.

πŸ—‚οΈ Phases & Requirements

Phase 1: Vehicle → Drone Adaptation

Duration: 15 June 2025 – 15 August 2025

Setup:

  • Source platform: Vehicle LiDAR scans with 3D bounding-box annotations
  • Target platform: Unlabeled Drone LiDAR scans

Ranking Metric: AP@0.50 (R40) for the Car class evaluated on Drone data
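
For intuition, AP (R40) follows the KITTI-style protocol of interpolating precision at 40 equally spaced recall positions and averaging. The snippet below is a minimal NumPy re-implementation of that idea, not the organizers' evaluation code, which is what determines the official ranking.

```python
# Minimal sketch of KITTI-style AP (R40): precision is interpolated at 40
# equally spaced recall positions (1/40 ... 40/40) and averaged.
# Illustrative only; the organizers' evaluation code defines the ranking.
import numpy as np

def ap_r40(recall: np.ndarray, precision: np.ndarray) -> float:
    """recall/precision: cumulative values over score-ranked detections."""
    ap = 0.0
    for r in np.linspace(1.0 / 40, 1.0, 40):       # 40 recall positions
        mask = recall >= r
        p_interp = precision[mask].max() if mask.any() else 0.0
        ap += p_interp / 40.0
    return ap

# Toy precision-recall curve
recall = np.linspace(0.0, 0.8, 50)
precision = 1.0 - 0.5 * recall
print(f"AP(R40) = {ap_r40(recall, precision):.4f}")
```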


Phase 2: Vehicle → Drone & Quadruped Adaptation

Duration: 15 August 2025 – 15 September 2025

Setup:

  • Source platform: Vehicle LiDAR scans with annotations
  • Target platforms: Unlabeled Drone and Quadruped LiDAR scans

Ranking Metric: Weighted score combining:

  • AP@0.50 (R40) for the Car class
  • AP@0.25 (R40) for the Pedestrian class

(Scores computed across both Drone and Quadruped platforms.)
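
To make the aggregation concrete, the sketch below shows one way such a weighted score could be assembled from per-platform, per-class AP values. The uniform weights and the toy numbers are placeholder assumptions for illustration, not the official formula or any real results.

```python
# Illustrative sketch of assembling a weighted Phase 2 score from
# per-platform, per-class AP values (Car at IoU 0.50, Pedestrian at 0.25).
# Uniform weights and the toy numbers are placeholder assumptions, not the
# official formula or real results.
def phase2_score(ap: dict, weights: dict | None = None) -> float:
    """ap maps (platform, class) -> AP (R40) in percent."""
    keys = [("drone", "Car"), ("drone", "Pedestrian"),
            ("quadruped", "Car"), ("quadruped", "Pedestrian")]
    if weights is None:                        # placeholder: uniform weighting
        weights = {k: 1.0 / len(keys) for k in keys}
    return sum(weights[k] * ap[k] for k in keys)

toy_ap = {("drone", "Car"): 50.0, ("drone", "Pedestrian"): 30.0,
          ("quadruped", "Car"): 45.0, ("quadruped", "Pedestrian"): 28.0}
print(f"Combined score: {phase2_score(toy_ap):.2f}")
```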


🚗 Dataset Examples

[Dataset example figures]


πŸ› οΈ Baseline Model

In this track, we adopt PV-RCNN as the base 3D detector and leverage ST3D/ST3D++ as our baseline adaptation framework. Detailed environment setup and experimental protocols can be found in the Track5 GitHub repository.

Beyond the provided baseline, participants are encouraged to explore alternative strategies to further boost cross-platform performance:

  • Treat the cross-platform challenge as a domain adaptation problem by improving pseudo-label quality and fine-tuning on target-platform data (a minimal pseudo-label filtering sketch follows this list).
  • Design novel data augmentation techniques to bridge geometric and feature discrepancies across platforms.
  • Adopt geometry-agnostic 3D detectors, such as point-based architectures, that are less sensitive to platform-specific point-cloud characteristics.
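
As one concrete example for the first point above, self-training pipelines such as ST3D typically keep only confident predictions as pseudo-labels. The snippet below is a minimal, hypothetical sketch of per-class confidence filtering; the thresholds, class ids, and box format are illustrative assumptions, not the settings of the official baseline.

```python
# Minimal sketch of per-class confidence filtering for pseudo-labels, a
# common lever in ST3D-style self-training. Thresholds, class ids, and the
# (x, y, z, dx, dy, dz, heading) box format are illustrative assumptions.
import numpy as np

def filter_pseudo_labels(scores: np.ndarray, labels: np.ndarray,
                         thresh_by_class: dict) -> np.ndarray:
    """Return a boolean mask keeping boxes above their class threshold."""
    # Unknown classes get a threshold > 1.0, so they are always dropped.
    thresh = np.array([thresh_by_class.get(int(c), 1.1) for c in labels])
    return scores >= thresh

# Toy usage: keep Car (id 1) above 0.6 and Pedestrian (id 2) above 0.4
boxes = np.zeros((4, 7), dtype=np.float32)          # placeholder boxes
scores = np.array([0.90, 0.50, 0.45, 0.30])
labels = np.array([1, 1, 2, 2])
mask = filter_pseudo_labels(scores, labels, {1: 0.6, 2: 0.4})
print(boxes[mask].shape)                            # (2, 7): two boxes kept
```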

📊 Baseline Results

Phase 1 Results

| Method        | Car BEV AP@0.70 (R40) | Car 3D AP@0.70 (R40) | Car BEV AP@0.50 (R40) | Car 3D AP@0.50 (R40) |
|---------------|-----------------------|----------------------|-----------------------|----------------------|
| PVRCNN-Source | 34.60                 | 16.31                | 40.67                 | 33.70                |
| PVRCNN-ST3D   | 47.81                 | 26.03                | 53.40                 | 46.64                |
| PVRCNN-ST3D++ | 45.96                 | 25.37                | 52.65                 | 45.07                |

🔗 Resources

We provide the following resources to support the development of models in this track: