Suspicious Action Detection and Recognition in Remote Areas Using AI and Machine Learning Techniques

  • Dhivya Karunya S, S.E.A CET, Bengaluru
  • Krishna Kumar, Gopalan College of Engineering and Management, Bengaluru
Keywords: Abandoned luggage, behavior recognition, blob matching, fainting, fighting, loitering

Abstract

The use of video monitoring to detect suspicious behaviour in public transportation zones is gaining popularity. Automated offline video processing systems have generally been employed for post-event analysis, such as forensics and riot investigation, but comparatively little progress has been made on real-time event identification. In this research, we present a framework that processes raw video data received from a fixed colour camera installed at a specific site and makes real-time inferences about the observed activities. First, using a real-time blob matching technique, the proposed system obtains 3-D object-level information by detecting and tracking people and luggage in the scene. Behaviours and events are then detected semantically from the temporal characteristics of these blobs, using object and inter-object motion features. Supervised machine learning techniques are additionally used to detect and track social distancing between people moving in public spaces, with the observations drawn from CCTV footage. To demonstrate the potential of this technique, several types of behaviour significant to security in public transportation locations have been chosen, including abandoned and stolen items, fighting, fainting, and loitering. The experimental findings reported here, obtained on common public data sets, illustrate the approach's strong performance and low computational complexity.
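The blob-tracking step described above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes background subtraction has already produced blob centroids per frame, and the function names, distance threshold, and dwell-time loitering rule are all illustrative stand-ins for the real-time blob matching and temporal behaviour analysis the abstract refers to.

```python
import math

def match_blobs(tracks, detections, max_dist=50.0):
    """Greedily associate detected blob centroids with existing tracks by
    nearest-neighbour distance (a stand-in for the real-time blob matching
    step). `tracks` is a list of dicts; `detections` a list of (x, y)."""
    unmatched = set(range(len(detections)))
    for track in tracks:
        best, best_d = None, max_dist
        for i in unmatched:
            d = math.dist(track["pos"], detections[i])
            if d < best_d:
                best, best_d = i, d
        if best is not None:           # update the matched track
            track["pos"] = detections[best]
            track["age"] += 1
            unmatched.discard(best)
    for i in unmatched:                # start a new track per leftover blob
        tracks.append({"pos": detections[i], "age": 1})
    return tracks

def is_loitering(track, frames_threshold=100):
    """Toy temporal rule: flag loitering once a blob has persisted in the
    scene longer than a dwell threshold (the paper's semantic detection
    uses richer temporal blob characteristics)."""
    return track["age"] >= frames_threshold
```

For example, feeding the same centroid in over several frames keeps one track alive and increments its age, while a centroid appearing far from all tracks spawns a new one; the age counter then drives the loitering rule.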

Author Biographies

Dhivya Karunya S

Assistant Professor, S.E.A CET, Bengaluru

Krishna Kumar

Professor, Gopalan College of Engineering and Management, Bengaluru

Published
2021-12-18