Technical University of Munich


Master's Thesis - LiDAR-guided Monocular 3D Object Detection for Railway Monitoring

15.11.2023, Diploma, Bachelor's, and Master's Theses

SETLabs Research GmbH participates in the project safe.trAIn (https://safetrain-projekt.de/en/) with a focus on 3D Object Detection and Sensor Fusion. One of our partners is Siemens Mobility, which provides a dataset of sensor recordings from test drives performed by prototypes of future autonomous trains. These recordings, in ROS 2 format, contain several sequences of LiDAR point clouds and camera images.

Object Detection in the railway domain poses a special challenge: since a train travels at high speed (usually around 130 km/h or more), a train with AI-based perception needs to react and brake well in advance in case of danger. Therefore, in safe.trAIn, precise detections of objects at ranges of 400–600 meters and beyond are desirable.
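To put that detection range in perspective, a rough stopping-distance estimate shows why detections beyond 400–600 m matter. The reaction time and deceleration values below are illustrative assumptions, not figures from the safe.trAIn project:

```python
# Illustrative stopping-distance estimate for a train at 130 km/h.
# The 1 s reaction delay and 0.7 m/s^2 service braking are assumed
# typical values for illustration, not safe.trAIn specifications.

def stopping_distance(speed_kmh: float, reaction_s: float, decel_ms2: float) -> float:
    """Distance covered during the reaction time plus braking to a stop."""
    v = speed_kmh / 3.6                     # convert km/h to m/s
    return reaction_s * v + v**2 / (2 * decel_ms2)

d = stopping_distance(130.0, 1.0, 0.7)
print(f"{d:.0f} m")  # roughly 968 m
```

Even with these rough numbers, the stopping distance comes out near a kilometer, so objects must already be detected well beyond the 400–600 m range to leave any safety margin.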
Goals of this thesis are:
• Explore railway datasets like the one provided by Siemens Mobility and OSDaR23 (Open Sensor Data for Rail 2023) from Digitale Schiene Deutschland, i.e., become familiar with their sequences of point clouds and images.
• Research the state of the art of Monocular 3D Object Detection with Deep Learning and Sensor Fusion.
• Study and understand how the LiDAR sensor can support the monocular 3D Object Detection task, i.e., how the point cloud data can help to enhance and refine the location and depth estimation of objects in a single camera frame. Possible lines of research are neural networks with attention, Transformers, and teacher-student or distillation architectures.
• Implement a Sensor Fusion baseline and improve on it. Experiments could be done using the railway datasets and/or another automotive dataset.
• Evaluate the results of the implementation quantitatively and qualitatively, analyzing in detail the benefits of Sensor Fusion, i.e., comparing Monocular 3D Object Detection without LiDAR vs. LiDAR-guided.
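A common starting point for LiDAR guidance is projecting the point cloud into the camera frame to obtain a sparse depth map, which can then supervise or refine monocular depth estimates. A minimal sketch using the standard pinhole model follows; the calibration values are made-up placeholders, not from the Siemens Mobility or OSDaR23 setups:

```python
import numpy as np

def lidar_to_sparse_depth(points, T_cam_lidar, K, image_shape):
    """Project LiDAR points into the image plane, returning a sparse depth map.

    points      : (N, 3) LiDAR points in the sensor frame
    T_cam_lidar : (4, 4) extrinsic transform from LiDAR to camera frame
    K           : (3, 3) camera intrinsic matrix
    image_shape : (H, W)
    """
    H, W = image_shape
    # Homogeneous coordinates, transformed into the camera frame
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    # Keep only points in front of the camera
    pts_cam = pts_cam[pts_cam[:, 2] > 0]
    # Pinhole projection and normalization by depth
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u, v, z = uv[:, 0].astype(int), uv[:, 1].astype(int), pts_cam[:, 2]
    # Discard points that fall outside the image
    mask = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    depth = np.zeros((H, W), dtype=np.float32)
    depth[v[mask], u[mask]] = z[mask]  # later points overwrite; fine for a sketch
    return depth

# Placeholder calibration (illustrative values only)
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)  # identity extrinsics for the sketch
pts = np.array([[1.0, 0.5, 20.0], [-2.0, 0.0, 50.0]])
depth = lidar_to_sparse_depth(pts, T, K, (720, 1280))
```

Such a sparse depth map is one simple form of LiDAR guidance; distillation- or attention-based approaches instead inject the LiDAR signal at the feature level, but the projection geometry above remains the underlying link between the two sensors.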
• Requirement: Background in Computer Vision / Deep Learning / Autonomous Driving from previous courses or lab sessions at TUM.
• Competences: Solid Python and PyTorch programming skills. Docker and ROS 2 experience would be an advantage.
• Experience with datasets like KITTI or nuScenes, or previous work with 3D perception and LiDAR point clouds, would be highly beneficial.

Contact: xavier.diaz@setlabs.de
