Human activity recognition

Model: Digital Document
Publisher: Florida Atlantic University
Description:
Human Activity Recognition (HAR) plays a crucial role in applications such as healthcare, fitness tracking, security, and smart environments by enabling the automatic classification of human actions from sensor and visual data. This dissertation presents a comprehensive exploration of HAR using machine learning, sensor-based data, and fusion approaches. HAR classifies human activities over time by analyzing data from sensors such as accelerometers and gyroscopes. Recent advances in computational technology and sensor availability have driven significant progress in this field, enabling the integration of these sensors into smartphones and other devices. The first study outlines the foundational aspects of HAR and reviews the existing literature, highlighting the importance of machine learning applications in healthcare, athletics, and personal use. The second study addresses the challenges of handling large-scale, variable, and noisy sensor data in HAR systems. Applying machine learning algorithms to the KU-HAR dataset, it finds that the LightGBM classifier outperforms the others on key performance metrics, including accuracy, precision, recall, and F1 score, underscoring the continued relevance of optimizing machine learning techniques for improved HAR systems. The third study addresses common practical challenges in HAR research, such as varying smartphone models and sensor configurations, by employing data fusion techniques. Together, the studies highlight the potential for future research to explore more advanced fusion techniques that fully leverage different data modalities for HAR.
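The abstract above compares classifiers on accuracy, precision, recall, and F1 score. As a minimal, self-contained sketch of that evaluation step (the activity labels and predictions below are invented for illustration, not taken from the KU-HAR results), the following computes those four metrics with macro averaging over classes:

```python
def macro_metrics(y_true, y_pred):
    """Accuracy plus macro-averaged precision, recall, and F1 score."""
    labels = sorted(set(y_true) | set(y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precisions, recalls, f1s = [], [], []
    for c in labels:
        # Per-class confusion counts: true/false positives, false negatives.
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(labels)
    return accuracy, sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

# Toy ground-truth and predicted activity labels (hypothetical).
y_true = ["walk", "walk", "sit", "run", "run", "sit"]
y_pred = ["walk", "sit", "sit", "run", "walk", "sit"]
acc, prec, rec, f1 = macro_metrics(y_true, y_pred)
```

In practice a library such as scikit-learn provides these metrics directly; the explicit per-class loop just makes the averaging visible.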

Model: Digital Document
Publisher: Florida Atlantic University
Description:
The ability to recognize human actions is essential for navigating daily life. Biological motion is the primary mechanism people use to recognize actions quickly and efficiently, though their precision can vary. Artificial Neural Networks (ANNs) have the potential to make common human tasks, including action recognition, more efficient and effective. However, the performance of ANNs in action recognition depends on the type of model used. This study aimed to improve the accuracy of ANNs in action classification by incorporating biological motion information into the input conditions. It used the UCF Crime dataset, which contains surveillance videos of normal and criminal activity, and extracted biological motion information with OpenPose, a pose-estimation ANN. The extracted biological motion information was used to create four input conditions (image only, image with biological motion, biological motion only, and coordinates only), and either a 3-Dimensional Convolutional Neural Network (3D CNN) or a Gated Recurrent Unit (GRU) classified the actions. Overall, the study found that including biological motion information in the input led to higher accuracy regardless of the number of action categories in the dataset. Moreover, the GRU model trained on the 'coordinates only' condition achieved the best accuracy of all the action classification models. These findings suggest that incorporating biological motion into input conditions and using numerical input data can benefit the development of accurate action classification models with ANNs.
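The 'coordinates only' condition feeds numeric keypoint vectors rather than pixels into the classifier. The sketch below illustrates one plausible preprocessing step for such a condition: centering each frame's (x, y) keypoints on a reference joint and scaling them before flattening into a per-frame feature vector. The choice of reference index and the normalization scheme here are assumptions for illustration, not details taken from the study:

```python
def keypoints_to_sequence(frames, center_idx=0):
    """Turn per-frame lists of (x, y) keypoints into normalized flat vectors,
    mimicking a 'coordinates only' input condition for a sequence model."""
    sequence = []
    for kps in frames:
        # Center all keypoints on an assumed reference joint.
        cx, cy = kps[center_idx]
        shifted = [(x - cx, y - cy) for x, y in kps]
        # Scale so coordinates fall in [-1, 1]; guard against a zero scale.
        scale = max(max(abs(x), abs(y)) for x, y in shifted) or 1.0
        sequence.append([v / scale for pt in shifted for v in pt])
    return sequence

# One toy frame with three keypoints; keypoint 1 is the reference joint.
seq = keypoints_to_sequence([[(0.0, 0.0), (2.0, 2.0), (4.0, 4.0)]],
                            center_idx=1)
```

A recurrent model such as a GRU would then consume `sequence` frame by frame, one flattened vector per timestep.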