An Intelligent Method for Human Activity Recognition (HAR) Using Advanced Convolutional Neural Network (CNN) Model
Published in | 2024 1st International Conference on Sustainable Computing and Integrated Communication in Changing Landscape of AI (ICSCAI), pp. 1 - 5 |
---|---|
Main Authors | |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 04.07.2024 |
Subjects | |
DOI | 10.1109/ICSCAI61790.2024.10866371 |
Summary: | Vision-based action recognition aims to identify distinct human activities from the overall movements that compose them. It can also help predict how an individual will behave in the future by drawing conclusions from their present behaviour. The topic has attracted sustained attention because it addresses real-world problems including visual surveillance, driverless automobiles, and entertainment. A great deal of research has been done in this domain to develop an efficient human action recognizer, and more work is still expected. Human action detection therefore has a plethora of applications, such as video surveillance and patient monitoring. This paper presents Convolutional Neural Network (CNN) models whose results outperform the conventional two-stream CNN technique by at least 8% in accuracy. Wearable exoskeleton robots are becoming a promising technology for assisting human movement in many activities, and real-time activity detection provides helpful data for improving the robot's control support during routine operations. Using two rotary encoders built into the exoskeleton robot together with activity signals from an inertial measurement unit (IMU), this study implements a real-time activity recognition system. Five deep learning models were trained and assessed for recognizing activities in real time. A subset of the refined models was then evaluated in real time on an edge device using eight typical human actions: standing, bending, crouching, walking, sit-down, sit-up, and climbing and descending stairs. With the chosen edge device, these eight robot-wearer behaviours are identified in real-time testing with an average accuracy of 97.35%, an inference time of less than 10 ms, and an overall latency of 0.506 s per recognition. |
---|---|
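The abstract describes classifying IMU activity signals in real time with deep learning models. As a minimal sketch of the usual preprocessing for such a pipeline, the snippet below segments a stream of IMU samples into fixed-length, overlapping windows that would then be fed to a CNN classifier. The window length, step size, and sampling rate are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical preprocessing sketch: windowing an IMU sample stream for
# an activity classifier. Parameters (100 Hz, 2 s windows, 50% overlap)
# are assumed for illustration and are NOT from the paper.

# The eight activity classes named in the abstract.
ACTIVITIES = ["standing", "bending", "crouching", "walking",
              "sit-down", "sit-up", "climbing stairs", "descending stairs"]

def sliding_windows(samples, win_len=200, step=100):
    """Split a 1-D sequence of samples into overlapping windows.

    win_len=200 with step=100 corresponds to 2 s windows with 50%
    overlap at an assumed 100 Hz IMU sampling rate.
    """
    return [samples[i:i + win_len]
            for i in range(0, len(samples) - win_len + 1, step)]

# Example: 5 s of dummy samples at 100 Hz yields 4 overlapping windows.
stream = list(range(500))
windows = sliding_windows(stream)
print(len(windows), len(windows[0]))  # prints: 4 200
```

Each window would be passed to the trained model, which outputs one of the eight `ACTIVITIES` labels; keeping the windows short is what allows the sub-10 ms inference time reported in the abstract to translate into low end-to-end latency.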