This project builds a system that detects people and recognizes faces in
real time within an XR environment to help identify missing individuals.
It runs on Unity and Meta Quest 3 and uses the YOLOv8n-face and
MobileFaceNet models for inference.
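As a rough sketch of how the two models fit together (the function names, stub logic, and 128-dimensional embedding size below are illustrative assumptions, not the project's actual Unity code): YOLOv8n-face proposes face bounding boxes in each frame, each box is cropped, and MobileFaceNet maps the crop to a fixed-length embedding vector.

```python
import numpy as np

EMBED_DIM = 128  # MobileFaceNet commonly outputs a 128-d embedding

def detect_faces(frame):
    """Stand-in for YOLOv8n-face: returns (x1, y1, x2, y2) boxes.
    Here we fake a single detection for illustration."""
    return [(40, 40, 152, 152)]

def embed_face(face_crop):
    """Stand-in for MobileFaceNet: maps a face crop to an
    L2-normalized embedding vector (deterministic dummy here)."""
    rng = np.random.default_rng(int(face_crop.sum()) % 2**32)
    v = rng.standard_normal(EMBED_DIM)
    return v / np.linalg.norm(v)

# Fake camera frame standing in for the Quest passthrough feed
frame = np.zeros((480, 640, 3), dtype=np.uint8)

embeddings = []
for (x1, y1, x2, y2) in detect_faces(frame):
    crop = frame[y1:y2, x1:x2]          # crop the detected face region
    embeddings.append(embed_face(crop))  # embed for later comparison

print(len(embeddings), embeddings[0].shape)  # → 1 (128,)
```

The real system would run both networks on-device every frame; the stubs above only show the data flow from frame to boxes to embeddings.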
In South Korea, many people go missing every year, yet current information delivery for missing persons relies mainly on SMS alerts, which require recipients to remember a face and spot it manually. This passive structure is limited by its dependence on human memory. To address this, I designed this project to convert that passive structure into an active recognition system using XR technology.
The entire system is structured as follows:
Figure. Flow of Missing Person Detection System in XR
I was responsible for overall system planning, model selection, UI/UX design, inference pipeline configuration, testing in the Meta Quest environment, and UI development. I focused in particular on optimizing model inference speed and improving the accuracy of face embedding comparison.
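The embedding comparison step can be illustrated with cosine similarity, a common metric for face embeddings (the 0.6 threshold below is a placeholder; a real system would tune it on validation data):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_match(query, reference, threshold=0.6):
    """Declare a match when similarity clears the threshold.
    The threshold value here is illustrative, not the project's."""
    return cosine_similarity(query, reference) >= threshold

ref  = np.array([1.0, 0.0, 0.0])  # enrolled missing-person embedding
same = np.array([0.9, 0.1, 0.0])  # embedding of the same face
diff = np.array([0.0, 1.0, 0.0])  # embedding of a different face

print(is_match(same, ref), is_match(diff, ref))  # → True False
```

Raising the threshold reduces false positives at the cost of missed matches, which is the central accuracy trade-off mentioned above.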
I learned that integrating an AI model into Unity is only part of the challenge; the real complexity lies in designing and debugging for real-world use. This was my first time bringing AI into an XR environment, and I ran into many practical constraints. Working through them strengthened my problem-solving skills and resilience.