
Dynamic 3D avatar reconstruction with radiance field

Capturing high quality, realistic 3D models of humans is an active area of research in the computer vision and machine learning communities. It has broad applications in many areas like virtual and augmented reality.

Ill.: Guo, K., et al. (2019). "The relightables: Volumetric performance capture of humans with realistic relighting". ACM Transactions on Graphics (ToG), 38(6).

Master Project

In an increasingly “technologified” world, we may worry whether heavy use of a home-office laptop during a lockdown has predictable, specific, negative (and potentially avoidable) outcomes such as back or neck pain. Working out at the gym, we may wonder whether an exercise actually makes us fitter or just tightens us up further. Body-worn sensors and sensors embedded in furniture, clothing and floors can help measure functioning and well-being, improve product design, and create better work and exercise routines. However, without a good model of us, the users, the value of those data is limited: it is hard to say, with any degree of certainty, that sitting in a certain way is bad for your posture, or that a chair is poorly designed. Furthermore, human shape deforms with articulation, soft tissue and non-rigid clothing dynamics, which makes realistic human avatar reconstruction extremely challenging. The objective of this project is to design a 3D avatar reconstruction pipeline that combines elements of traditional geometry pipelines with recent advances in deep learning.

Research Topic Focus

The master project aims to develop a neural dynamic human avatar reconstruction method using an existing 3D capturing rig. The rig consists of multiple 3D cameras, whose individual point clouds are stitched together to create a full 3D representation of the body (a minimal merging sketch is given after the task list below). The main tasks include:

  • Improving the 3D capturing rig for better data quality
  • Defining an appropriate deep learning model for neural dynamic avatar reconstruction that improves on the traditional geometry pipeline (see the radiance-field sketch below)
  • Evaluating the reconstructed avatar
  • Publication
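To make the stitching step concrete, here is a minimal sketch of fusing per-camera point clouds into a single body scan, assuming the rig is calibrated so that a 4x4 camera-to-world extrinsic matrix is known for each camera. File names, the number of cameras, and the voxel size are illustrative assumptions, not details of the actual rig; in practice the alignment would typically be refined further (e.g. with ICP registration).

```python
# Minimal sketch: merging per-camera point clouds into one body scan.
# Assumes a calibrated rig, i.e. a known 4x4 camera-to-world extrinsic
# matrix per camera. Paths and camera count are hypothetical.
import numpy as np
import open3d as o3d

def merge_point_clouds(cloud_paths, extrinsics, voxel_size=0.005):
    """Transform each camera's point cloud into the world frame and fuse them."""
    merged = o3d.geometry.PointCloud()
    for path, T in zip(cloud_paths, extrinsics):
        pcd = o3d.io.read_point_cloud(path)  # per-camera cloud (camera frame)
        pcd.transform(T)                      # bring into the common world frame
        merged += pcd
    # Downsample to thin out duplicated points where camera views overlap
    return merged.voxel_down_sample(voxel_size)

if __name__ == "__main__":
    # Hypothetical inputs: four cameras, identity extrinsics as placeholders
    paths = [f"capture/cam{i}.ply" for i in range(4)]
    extrinsics = [np.eye(4) for _ in range(4)]
    body = merge_point_clouds(paths, extrinsics)
    o3d.io.write_point_cloud("capture/body_merged.ply", body)
```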
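As a reference point for the modelling task, the sketch below shows a NeRF-style radiance field MLP conditioned on a per-frame latent code to account for dynamics, in the spirit of the referenced work [1]. The network sizes, latent code, and positional-encoding settings are illustrative assumptions only, not the architecture prescribed by the project.

```python
# Minimal sketch of a dynamic radiance-field MLP (PyTorch).
# All hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    """Map 3D coordinates to sin/cos features of increasing frequency."""
    feats = [x]
    for i in range(num_freqs):
        feats += [torch.sin((2.0 ** i) * x), torch.cos((2.0 ** i) * x)]
    return torch.cat(feats, dim=-1)

class DynamicRadianceField(nn.Module):
    def __init__(self, num_freqs=6, latent_dim=32, hidden=128):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 3 * (1 + 2 * num_freqs) + latent_dim
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # RGB colour + volume density
        )

    def forward(self, xyz, frame_code):
        # xyz: (N, 3) sample points; frame_code: (N, latent_dim) per-frame latent
        h = torch.cat([positional_encoding(xyz, self.num_freqs), frame_code], dim=-1)
        out = self.mlp(h)
        rgb = torch.sigmoid(out[..., :3])   # colours in [0, 1]
        density = torch.relu(out[..., 3:])  # non-negative volume density
        return rgb, density

# Hypothetical usage: query 1024 sample points for a single frame
model = DynamicRadianceField()
pts = torch.rand(1024, 3)
code = torch.zeros(1024, 32)
rgb, sigma = model(pts, code)
```

Conditioning on a per-frame (or per-pose) latent code is one simple way to let a single network represent a deforming body; alternatives such as deformation fields or skeleton-driven canonicalisation would be explored as part of the project.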

Recommended Prerequisites

Experience or courses in machine learning, deep learning, and computer vision; familiarity with Python and deep learning frameworks (PyTorch preferred). Experience with visual processing using deep learning models will be considered an advantage.

References

  1. Cai, H., Feng, W., Feng, X., Wang, Y., & Zhang, J. (2022). Neural Surface Reconstruction of Dynamic Scenes with Monocular RGB-D Camera. arXiv preprint arXiv:2206.15258.
  2. Guo, K., Lincoln, P., Davidson, P., Busch, J., Yu, X., Whalen, M., ... & Izadi, S. (2019). The relightables: Volumetric performance capture of humans with realistic relighting. ACM Transactions on Graphics (ToG), 38(6), 1-19.

Ahmed Mohammed