[Master's Thesis] - Synthetic Data Generation for Challenging Egocentric Dynamic Neural Novel View Synthesis
The Human-Centered Computing and Extended Reality Lab of the Professorship for Machine Intelligence in Orthopedics seeks applicants for a Master's thesis.
Project Description
Dynamic novel view synthesis reconstructs environments over time, extending traditional static novel view synthesis techniques. Modern approaches, such as time-aware adaptations of Gaussian Splatting, can reconstruct dynamic scenes in high quality. Datasets for dynamic reconstruction exist, but they often lack edge cases or are not egocentric. As part of a broader effort to reconstruct operating rooms and surgeries, this project aims to generate a synthetic dataset using the Unity game engine. The dataset will be designed to test the boundaries of existing novel view synthesis algorithms and inform further developments. Such challenges could include preserving offscreen content, interpolating trajectories, and similar cases. If possible, these outcomes should be tested in user studies to identify beneficial strategies. A baseline Unity project for creating such videos already exists and should be extended.
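To make "structured video output" concrete: dynamic reconstruction methods typically consume per-frame images together with synchronized camera poses. Below is a minimal Unity (C#) sketch of such a capture loop. All names here (FrameCapture, outputDir, poses.csv) are illustrative assumptions, not part of the existing baseline project.

```csharp
using System.IO;
using UnityEngine;

// Hypothetical capture component: writes one PNG per frame plus a CSV of
// camera poses, the kind of paired data dynamic NVS pipelines expect.
public class FrameCapture : MonoBehaviour
{
    public Camera captureCamera;      // e.g. an egocentric head-mounted camera
    public int width = 1280, height = 720;
    public string outputDir = "Capture";

    RenderTexture rt;
    Texture2D tex;
    StreamWriter poses;
    int frame;

    void Start()
    {
        Time.captureFramerate = 30;   // deterministic frame timing for offline capture
        Directory.CreateDirectory(outputDir);
        rt = new RenderTexture(width, height, 24);
        tex = new Texture2D(width, height, TextureFormat.RGB24, false);
        poses = new StreamWriter(Path.Combine(outputDir, "poses.csv"));
        poses.WriteLine("frame,px,py,pz,qx,qy,qz,qw");  // position + rotation quaternion
    }

    void LateUpdate()
    {
        // Render into an offscreen texture and read the pixels back.
        captureCamera.targetTexture = rt;
        captureCamera.Render();
        RenderTexture.active = rt;
        tex.ReadPixels(new Rect(0, 0, width, height), 0, 0);
        RenderTexture.active = null;
        captureCamera.targetTexture = null;

        // Save the frame and its camera pose under a shared frame index.
        File.WriteAllBytes(Path.Combine(outputDir, $"{frame:D5}.png"), tex.EncodeToPNG());
        var t = captureCamera.transform;
        poses.WriteLine($"{frame},{t.position.x},{t.position.y},{t.position.z}," +
                        $"{t.rotation.x},{t.rotation.y},{t.rotation.z},{t.rotation.w}");
        frame++;
    }

    void OnDestroy() => poses?.Close();
}
```

A real dataset would additionally record camera intrinsics and likely use a standard layout (e.g. one folder per camera); this sketch only shows the core render-readback-save loop.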
Key Research Areas
- Reviewing state-of-the-art dynamic novel view synthesis techniques
- Identifying edge cases in dynamic reconstruction
- Designing and planning edge case scenarios for our dataset
- Implementing synthetic scenes in Unity and generating a structured video output
- Evaluating the dataset using publicly available AI reconstruction methods (see the metric sketch after this list)
- Evaluating different approaches to these challenges in user studies
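When evaluating the dataset, reconstructed views are usually scored against held-out ground-truth frames with image metrics such as PSNR (often alongside SSIM and LPIPS). Public reconstruction codebases ship their own metric implementations; the self-contained sketch below is only meant to make the criterion concrete.

```csharp
using System;

// Minimal PSNR sketch: compares two same-sized images given as raw
// byte arrays (one value per channel, 0-255). Higher is better;
// identical images yield positive infinity.
static class Psnr
{
    public static double Compute(byte[] groundTruth, byte[] reconstruction)
    {
        if (groundTruth.Length != reconstruction.Length)
            throw new ArgumentException("Images must have identical dimensions.");

        double mse = 0;
        for (int i = 0; i < groundTruth.Length; i++)
        {
            double d = groundTruth[i] - reconstruction[i];
            mse += d * d;
        }
        mse /= groundTruth.Length;

        // PSNR = 10 * log10(MAX^2 / MSE) with MAX = 255 for 8-bit channels.
        return mse == 0 ? double.PositiveInfinity
                        : 10.0 * Math.Log10(255.0 * 255.0 / mse);
    }
}
```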
Requirements
- Interest in AI-driven reconstruction and novel view synthesis
- Ability to understand and run AI codebases
- Experience with the Unity game engine
- Proficiency in Python and C#
Please send your transcript of records, CV, and letter of motivation to Constantin Kleinbeck (constantin.kleinbeck@tum.de), with CC to hex-thesis.ortho@mh.tum.de.
Literature
[1] J. Yan et al., “Instant Gaussian Stream: Fast and Generalizable Streaming of Dynamic Scene Reconstruction via Gaussian Splatting,” in Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), 2025, pp. 16520–16531.
[2] T. Li et al., “Neural 3D Video Synthesis from Multi-view Video,” in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA: IEEE, Jun. 2022, pp. 5511–5521. doi: 10.1109/CVPR52688.2022.00544.
[3] A. Cao and J. Johnson, “HexPlane: A Fast Representation for Dynamic Scenes,” in 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada: IEEE, Jun. 2023, pp. 130–141. doi: 10.1109/CVPR52729.2023.00021.
[4] G. Wu et al., “4D Gaussian Splatting for Real-Time Dynamic Scene Rendering,” in 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA: IEEE, Jun. 2024, pp. 20310–20320. doi: 10.1109/CVPR52733.2024.01920.
[5] Y. Duan, F. Wei, Q. Dai, Y. He, W. Chen, and B. Chen, “4D-Rotor Gaussian Splatting: Towards Efficient Novel View Synthesis for Dynamic Scenes,” Jul. 02, 2024, arXiv: arXiv:2402.03307. doi: 10.48550/arXiv.2402.03307.
[6] Z. Xu et al., “Representing Long Volumetric Video with Temporal Gaussian Hierarchy,” ACM Trans. Graph., vol. 43, no. 6, pp. 1–18, Dec. 2024. doi: 10.1145/3687919.
[7] J. Wu et al., “Swift4D: Adaptive divide-and-conquer Gaussian Splatting for compact and efficient reconstruction of dynamic scene,” Mar. 16, 2025, arXiv: arXiv:2503.12307. doi: 10.48550/arXiv.2503.12307.
Contact: constantin.kleinbeck@tum.de


