Counting and Classifying Exercise Postures using 3D Keypoint Model

Haejun Yang, Visual Intelligence Lab
About


Media Art | 00:02:07 | 2025

This work estimates the user's 3D keypoints and computes joint angles from them. The computed joint angles are used to automatically recognize the user's motion and count repetitions. The system classifies squats, left arm curls, right arm curls, and side lateral raises in real time and records the repetition count for each exercise. Because it identifies poses and counts repetitions using only a webcam and a lightweight AI model, with no additional sensors, it is expected to serve as a helpful exercise assistant for users.
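The pipeline described above (3D keypoints → joint angles → pose classification and repetition counting) can be illustrated with a minimal sketch. The function and class names, the squat thresholds, and the keypoint dictionary below are illustrative assumptions, not the work's actual implementation; any 3D pose-estimation model that outputs (x, y, z) joint coordinates per frame could feed it.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by 3D points a-b-c."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

class RepCounter:
    """Counts repetitions by tracking a joint angle through down/up thresholds."""
    def __init__(self, down_thresh, up_thresh):
        self.down_thresh = down_thresh  # angle below this marks the "down" phase
        self.up_thresh = up_thresh      # angle above this marks return to "up"
        self.in_down_phase = False
        self.count = 0

    def update(self, angle):
        if angle < self.down_thresh:
            self.in_down_phase = True
        elif angle > self.up_thresh and self.in_down_phase:
            self.in_down_phase = False
            self.count += 1      # one full down-and-up cycle completed
        return self.count

# Example: count squats from the knee angle (hip-knee-ankle).
# Thresholds are hypothetical values chosen for illustration only.
squat_counter = RepCounter(down_thresh=100, up_thresh=160)
# For each frame, given 3D keypoints from a pose model:
# keypoints = {"hip": (x, y, z), "knee": (x, y, z), "ankle": (x, y, z)}
# angle = joint_angle(keypoints["hip"], keypoints["knee"], keypoints["ankle"])
# reps = squat_counter.update(angle)
```

The same angle-and-threshold pattern extends to the other exercises, e.g. tracking the elbow angle (shoulder-elbow-wrist) for the left and right arm curls and the shoulder abduction angle for side lateral raises, with per-exercise thresholds.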

Artist

Haejun Yang is a researcher at the Visual Intelligence Lab (VILab) at Chung-Ang University.
Before joining the lab, Haejun Yang majored in Physical Education, where he began exploring the intersection between human movement and artificial intelligence. His growing curiosity about AI led him to contact Professor Jongwon Choi, and following an interview, he became a member of VILab.
As part of the lab, Haejun Yang has been deepening his expertise in AI through various research projects, with a particular focus on video understanding and generation using Large Multimodal Models (LMMs). His current research aims to integrate both personalized and generic concepts into LMM-based video research, reflecting his interest in bridging human-centered perspectives with advanced computational intelligence.