Human action recognition involves analyzing video footage to predict or classify the actions performed by the person in that video. It is widely applied in diverse fields like surveillance, sports, fitness, and defense.

Let's say you want to build an application for teaching yoga online. It should offer a list of pre-recorded yoga session videos for users to watch. After watching a video on the app, users can upload videos of their personal practice sessions. The app then evaluates their performance and gives feedback based on how well the user has performed the various yoga asanas (or poses). Wouldn't it be great to use action recognition to automate the evaluation of these videos? Well, there's more you can do with it. The yoga application shown below uses human pose estimation to detect each yoga pose and classifies it as one of the following asanas: Natarajasana, Trikonasana, or Virabhadrasana.

Figure: Yoga poses (Natarajasana, Trikonasana, or Virabhadrasana) identified based on keypoints detected on the human body

In this post, we will explain how to create such an application for human action recognition (or classification) using pose estimation and an LSTM (Long Short-Term Memory) network. We will create a web application that takes in a video and produces an output video annotated with the identified action classes.

Figure: Action recognition result based on keypoints detected on the human body
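To make the pose-estimation-plus-LSTM idea concrete, here is a minimal sketch of the classification stage: a single-layer LSTM implemented in NumPy that consumes a sequence of per-frame pose keypoint vectors and produces a softmax distribution over action classes. The dimensions (17 COCO-style keypoints, 16 hidden units, 3 yoga classes) and the random weights are illustrative assumptions only; in a real application the weights would be learned during training and the keypoints would come from a pose-estimation model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    # One LSTM time step: gates are computed from the current input x
    # and the previous hidden state h; c is the running cell state.
    H = h.shape[0]
    z = W @ x + U @ h + b            # stacked gate pre-activations, shape (4*H,)
    i = sigmoid(z[0:H])              # input gate
    f = sigmoid(z[H:2*H])            # forget gate
    g = np.tanh(z[2*H:3*H])          # candidate cell update
    o = sigmoid(z[3*H:4*H])          # output gate
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def classify_sequence(keypoints, W, U, b, W_out, b_out):
    # keypoints: (T, D) array -- one D-dimensional pose vector per video frame.
    H = b.shape[0] // 4
    h, c = np.zeros(H), np.zeros(H)
    for x in keypoints:              # unroll the LSTM over the frame sequence
        h, c = lstm_step(x, h, c, W, U, b)
    logits = W_out @ h + b_out       # classify from the final hidden state
    e = np.exp(logits - logits.max())
    return e / e.sum()               # softmax over the action classes

# Illustrative dimensions: D = 17 keypoints x (x, y) = 34 features,
# H = 16 hidden units, C = 3 classes (e.g. the three asanas above).
rng = np.random.default_rng(0)
D, H, C = 34, 16, 3
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
W_out = rng.normal(0, 0.1, (C, H))
b_out = np.zeros(C)

# A fake 30-frame clip of keypoints stands in for real pose-estimation output.
probs = classify_sequence(rng.normal(size=(30, D)), W, U, b, W_out, b_out)
print(probs)
```

The output is a length-3 probability vector; with trained weights, its argmax would be the predicted asana for the clip.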