Research

Research Statement

My research aims to design intelligent mobile augmented reality (AR) systems for sensing and understanding the physical world. Existing learning-based environment sensing models are often designed around curated training data and evaluation criteria. By neglecting important user, device, and environmental context, the learning objectives of many existing models fail to represent the conditions under which AR devices actually operate. My research takes a holistic view of mobile AR sensing system design in order to discover novel sensing systems that are reliable, efficient, and secure.

Current work. My Ph.D. dissertation focuses on developing novel mobile sensing systems for accurate and efficient environment lighting estimation, a fundamental AR perception task. Lighting estimation challenges existing mobile sensing system designs in a unique way: environment lighting cannot be measured directly from device sensory data in practice, so it must be inferred from indirect sensing observations.

My research focuses on designing context-aware systems that facilitate lighting estimation with multi-modal sensing and edge-assisted computing. I developed LitAR [5], a system that intelligently combines spatial and temporal sensory information from device dynamics to support challenging lighting estimation goals such as detailed environment reflections. To address the difficulties of evaluating user dynamics, I also developed a photorealistic simulation environment with controllable user parameters. LitAR represents an important systematic effort to carry data-driven learning problems into real-world deployment while accounting for AR contexts.

Efficiency is a demanding requirement for mobile AR because content must be delivered with low latency. Leveraging the key insight of system and ML model co-design, my past projects PointAR [2] and Xihe [3] reformulated the environment lighting learning objective and significantly reduced end-to-end estimation latency while achieving even higher accuracy than the state-of-the-art model. As a real-time system, Xihe [3] also applies intelligent control and scheduling policies to ensure system efficiency and stability. With insights derived from information theory, I designed an entropy-based triggering mechanism that efficiently controls lighting estimation workloads based on AR device camera observations. This design bypasses unnecessary executions of the lighting estimation workload and proved effective in real-world testing.
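To make the triggering idea concrete, below is a minimal sketch (in Python with NumPy) of how an entropy-based trigger of this kind could work; the grayscale histogram entropy measure, the 0.15 threshold, and the class and function names are illustrative assumptions for exposition, not Xihe's actual implementation.

```python
# Minimal sketch of an entropy-based trigger: run lighting estimation only
# when the camera observation, summarized by its intensity-histogram entropy,
# changes noticeably. The entropy measure and threshold are assumptions.
import numpy as np

def frame_entropy(gray_frame: np.ndarray, bins: int = 64) -> float:
    """Shannon entropy of a grayscale frame's intensity histogram."""
    hist, _ = np.histogram(gray_frame, bins=bins, range=(0, 255))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

class EntropyTrigger:
    """Skip lighting estimation when the scene observation barely changes."""
    def __init__(self, threshold: float = 0.15):
        self.threshold = threshold
        self.last_entropy = None

    def should_estimate(self, gray_frame: np.ndarray) -> bool:
        h = frame_entropy(gray_frame)
        if self.last_entropy is None or abs(h - self.last_entropy) > self.threshold:
            self.last_entropy = h
            return True   # observation changed enough: run the estimator
        return False      # otherwise, reuse the previous lighting estimate
```

In this sketch, a small change in frame entropy is taken as evidence that the lighting estimate from the previous frame can be reused, which is how the workload-bypassing behavior described above would be realized.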

My research experiences have also inspired me to pioneer multiple real-world applications, including: (i) user privacy protection in reflective AR object rendering [7], (ii) authentic portrait photo expression editing with mobile photo sequence data [6], and (iii) a long-term initiative to ensure controllable, scalable, and reproducible AR experimentation [1, 4].

Vision for future research. My future research will continue with the goal of bridging data-driven machine-learning models with mobile systems, including context-aware multi-modal sensing system design and system support for large generative model inference on resource-constrained devices. Recent developments in large generative models introduce many new opportunities for context-aware spatial computing systems. However, the ever-growing size of generative models poses serious challenges to bringing general intelligence to mobile systems. My prior experience positions my research to continue advancing the area of spatial computing.

References

  1. Ashkan Ganj, Yiqin Zhao, Federico Galbiati, and Tian Guo. 2023. Toward Scalable and Controllable AR Experimentation. In Proceedings of the 1st ACM Workshop on Mobile Immersive Computing, Networking, and Systems (Madrid, Spain). https://doi.org/10.1145/3615452.3617941
  2. Yiqin Zhao and Tian Guo. 2020. PointAR: Efficient Lighting Estimation for Mobile Augmented Reality. In European Conference on Computer Vision. Springer, 678–693.
  3. Yiqin Zhao and Tian Guo. 2021. Xihe: A 3D Vision-Based Lighting Estimation Framework for Mobile Augmented Reality. In Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services (MobiSys’21). 13 pages.
  4. Yiqin Zhao and Tian Guo. 2024. Demo: ARFlow: A Framework for Simplifying AR Experimentation Workflow. In Proceedings of the 25th International Workshop on Mobile Computing Systems and Applications (San Diego, CA, USA) (HOTMOBILE ’24). Association for Computing Machinery, New York, NY, USA, 154. https://doi.org/10.1145/3638550.3643617
  5. Yiqin Zhao, Chongyang Ma, Haibin Huang, and Tian Guo. 2022. LITAR: Visually Coherent Lighting for Mobile Augmented Reality. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, 3 (2022), 1–29.
  6. Yiqin Zhao, Rohit Pandey, Yinda Zhang, Ruofei Du, Feitong Tan, Chetan Ramaiah, Tian Guo, and Sean Fanello. 2023. Portrait Expression Editing With Mobile Photo Sequence. In SIGGRAPH Asia 2023 Technical Communications (Sydney, NSW, Australia) (SA ’23). Association for Computing Machinery, New York, NY, USA, Article 17, 4 pages. https://doi.org/10.1145/3610543.3626160
  7. Yiqin Zhao, Sheng Wei, and Tian Guo. 2022. Privacy-preserving Reflection Rendering for Augmented Reality. In Proceedings of the 30th ACM International Conference on Multimedia (Lisboa, Portugal) (MM ’22). Association for Computing Machinery, New York, NY, USA, 2909–2918. https://doi.org/10.1145/3503161.3548386

