Human Illegal Activity Recognition Based on Deep Learning Techniques
| Published in | 2023 IEEE International Conference on Integrated Circuits and Communication Systems (ICICACS), pp. 01 - 07 |
|---|---|
| Main Authors | , , , , |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 24.02.2023 |
Summary: | Human activity recognition in video is one of the most widely applied topics in the field of image and video processing, with applications in surveillance (security, sports, and so on), motion identification, video-content-based monitoring, human-machine interaction, and health/disability care. Homes and public locations such as hospitals, shopping malls, railway stations, and airports all have surveillance cameras installed to safeguard the public's well-being and the safety of their personal belongings. About seven out of ten of the homicides investigated and solved by professionals over the last several decades have benefited from the use of surveillance footage in this capacity. In several criminal investigations, the person of interest was located by comparing their picture with images of the same person captured by different cameras in different locations. Person Re-Identification, also known as Person Matching, is the process of discovering matches for a person or suspect across several cameras. The Person Re-Identification (PRID) system is a tool that helps keep track of people captured on video by a variety of cameras in a variety of settings. Given an input image of a person (called a probe image), the PRID system retrieves relevant pictures of the same person collected by multiple cameras in different places and at different times. By comparing images of the same person captured by different cameras, it automates the search for a person or target who has been tracked but then lost in a different camera view. In this study, we propose a novel human activity recognition method using convolutional neural networks (CNNs) to recognize the activity performed in a given video. |
Accordingly, this study provides the motivation for recognizing human activity in real time (future work). This paper focuses on the recognition of simple human actions using image processing techniques. Within the field of computer vision, Human Action Recognition (HAR) is one of the most active research areas. It plays a significant part in a broad variety of application fields, including video surveillance, content-based video retrieval, human-machine interaction, gait detection, gesture recognition, video indexing and comprehension, and many more. The use of video surveillance to secure and monitor the public's safety has become more common in today's society. Advances in surveillance systems have made these technologies a reality, and they are applied in a wide variety of real-world areas, including smart cities, parking lots, retail malls, ATM centres, and many other places. A visual analyst is responsible for manually monitoring the surveillance systems and, whenever an unusual incident takes place, for reporting it to higher authorities. This kind of analysis of video surveillance footage requires a lot of manpower and can result in mistakes. Because of the limitations of manual monitoring, the primary objective of HAR is to automatically detect the activities carried out by a person in videos. |
---|---|
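The summary describes PRID retrieval: given a probe image of a person, rank gallery images from other cameras by similarity. A minimal sketch of that ranking step, assuming (hypothetically) that some CNN has already produced fixed-length feature embeddings for the probe and gallery images; the function names and the cosine-similarity choice are illustrative assumptions, not the paper's stated method:

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    # Scale each feature vector to unit length so that dot
    # products become cosine similarities.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def rank_gallery(probe_feat, gallery_feats):
    """Return gallery indices sorted from most to least similar
    to the probe embedding, plus the similarity scores."""
    p = l2_normalize(probe_feat)
    g = l2_normalize(gallery_feats)
    sims = g @ p  # one cosine similarity per gallery image
    return np.argsort(-sims), sims

# Toy example: 4 gallery embeddings; the probe is a noisy view
# of gallery entry 2 (i.e. the same person seen by another camera).
rng = np.random.default_rng(0)
gallery = rng.normal(size=(4, 128))
probe = gallery[2] + 0.05 * rng.normal(size=128)
ranking, sims = rank_gallery(probe, gallery)
print(ranking[0])  # → 2, the matching identity is ranked first
```

In a real PRID pipeline the embeddings would come from a re-identification network and the ranking would run over thousands of gallery images per camera, but the retrieval logic reduces to this nearest-neighbour search in feature space.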
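The HAR objective above, labelling the activity a person performs in a video, is often reduced to classifying sampled frames with a CNN and aggregating the per-frame predictions. A minimal sketch of that aggregation step, assuming hypothetical per-frame softmax scores (the class names and score values are invented for illustration, not taken from the paper):

```python
import numpy as np

def video_label(frame_scores):
    """Average per-frame class scores over the clip and pick the
    argmax class. frame_scores has shape (num_frames, num_classes),
    e.g. CNN softmax outputs for each sampled frame."""
    return int(np.argmax(frame_scores.mean(axis=0)))

# Toy example with 3 hypothetical classes (0=walk, 1=run, 2=fight):
scores = np.array([
    [0.7, 0.2, 0.1],   # early frames look like walking
    [0.1, 0.2, 0.7],   # later frames look like fighting
    [0.1, 0.1, 0.8],
])
print(video_label(scores))  # → 2: averaging favours the fight class
```

Averaging before the argmax makes the video-level decision robust to a few ambiguous frames, which is why this simple late-fusion scheme is a common baseline for frame-based HAR.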
DOI: | 10.1109/ICICACS57338.2023.10099857 |