
Synthetic Data Generation for Dynamic Neural Novel View Synthesis

16.03.2025, Final Theses, Bachelor's and Master's Theses

This project focuses on generating synthetic training data for dynamic neural novel view synthesis systems. A particular emphasis is placed on creating edge case test data to evaluate the limits of current methods.

The Human-Centered Computing and Extended Reality Lab of the Professorship for Machine Intelligence in Orthopedics seeks applicants for a Bachelor's or Master's thesis in the Summer Semester 2025.

Project Description

Dynamic novel view synthesis reconstructs environments over time, extending traditional static novel view synthesis techniques. Modern approaches, such as time-aware adaptations of Gaussian Splatting, can reconstruct dynamic scenes in high quality, but existing datasets for dynamic reconstruction often lack edge cases. As part of a broader effort to reconstruct operating rooms (ORs), this project aims to generate a synthetic dataset using the Unity game engine, designed to test the boundaries of existing novel view synthesis algorithms.
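To give a sense of the "structured output" such a pipeline produces: many dynamic novel view synthesis codebases (e.g. D-NeRF-style pipelines) consume a `transforms.json` file listing, per rendered frame, the image path, a normalized timestamp, and a 4x4 camera-to-world pose. The following is a minimal sketch of such an exporter; the field names follow the common D-NeRF convention, but the exact schema depends on the target codebase and should be checked against it.

```python
import json

def write_transforms(path, camera_angle_x, frames):
    """Write a D-NeRF-style transforms file.

    camera_angle_x: horizontal field of view in radians.
    frames: list of (file_path, time, pose) tuples, where time is
            normalized to [0, 1] and pose is a 4x4 camera-to-world
            matrix given as nested lists.
    """
    data = {
        "camera_angle_x": camera_angle_x,
        "frames": [
            {"file_path": fp, "time": t, "transform_matrix": m}
            for fp, t, m in frames
        ],
    }
    with open(path, "w") as f:
        json.dump(data, f, indent=2)
```

In a Unity-based pipeline, the equivalent export would happen on the C# side, serializing each camera's pose and the scene time alongside every captured frame.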
Key research areas include:

  • Reviewing state-of-the-art dynamic novel view synthesis techniques
  • Designing and planning edge case scenarios for our dataset
  • Implementing synthetic scenes in Unity and generating a structured video output
  • Evaluating the dataset using publicly available AI reconstruction methods
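For the evaluation step above, a standard starting point is comparing rendered novel views against held-out ground-truth frames with an image metric such as PSNR (dynamic view synthesis papers typically also report SSIM and LPIPS). A minimal PSNR sketch, assuming images as float arrays in [0, 1]:

```python
import numpy as np

def psnr(rendered, ground_truth, max_val=1.0):
    """Peak signal-to-noise ratio between two images in [0, max_val].

    Higher is better; identical images yield infinity.
    """
    diff = rendered.astype(np.float64) - ground_truth.astype(np.float64)
    mse = np.mean(diff ** 2)  # mean squared error over all pixels/channels
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

In practice one would average such metrics over all test frames per edge-case scenario, to see which scenarios break which reconstruction methods.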

Recommended background (or motivation to learn):
  • Interest in AI-driven reconstruction and novel view synthesis
  • Experience with the Unity game engine
  • Ability to understand and run AI codebases
  • Proficiency in Python and C#

Please send your transcript of records, CV, and letter of motivation to: Constantin Kleinbeck (constantin.kleinbeck@tum.de) with CC to hex-thesis.ortho@mh.tum.de


References

[1] A. Pumarola, E. Corona, G. Pons-Moll, and F. Moreno-Noguer, “D-NeRF: Neural Radiance Fields for Dynamic Scenes,” in 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA: IEEE, Jun. 2021, pp. 10313–10322. doi: 10.1109/CVPR46437.2021.01018.
[2] T. Li et al., “Neural 3D Video Synthesis from Multi-view Video,” in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA: IEEE, Jun. 2022, pp. 5511–5521. doi: 10.1109/CVPR52688.2022.00544.
[3] A. Cao and J. Johnson, “HexPlane: A Fast Representation for Dynamic Scenes,” in 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada: IEEE, Jun. 2023, pp. 130–141. doi: 10.1109/CVPR52729.2023.00021.
[4] G. Wu et al., “4D Gaussian Splatting for Real-Time Dynamic Scene Rendering,” in 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA: IEEE, Jun. 2024, pp. 20310–20320. doi: 10.1109/CVPR52733.2024.01920.
[5] Y. Duan, F. Wei, Q. Dai, Y. He, W. Chen, and B. Chen, “4D-Rotor Gaussian Splatting: Towards Efficient Novel View Synthesis for Dynamic Scenes,” arXiv preprint arXiv:2402.03307, Jul. 2024. doi: 10.48550/arXiv.2402.03307.
[6] Z. Xu et al., “Representing Long Volumetric Video with Temporal Gaussian Hierarchy,” ACM Trans. Graph., vol. 43, no. 6, pp. 1–18, Dec. 2024, doi: 10.1145/3687919.

Contact: hex-thesis.ortho@mh.tum.de, constantin.kleinbeck@tum.de
