International Journal of Activity and Behavior Computing
Online ISSN : 2759-2871
Current issue
Displaying 1-7 of 7 articles from this issue
  • Hoang Anh Vy Ngo, Haru Kaneko, Iqbal Hassan, Elsen Ronando, Milyu ...
    2024 Volume 2024 Issue 3 Pages 1-20
    Published: 2024
    Released on J-STAGE: August 28, 2024
    JOURNAL OPEN ACCESS
    In this paper, we summarize the outcomes of a challenge we organized in which participants were tasked with evaluating nursing performance in endotracheal suctioning (ES) through Human Activity Recognition (HAR) using skeleton and video data combined with Generative AI, with the aim of enhancing training and improving healthcare delivery. Endotracheal suctioning is a critical procedure in intensive care units, essential for clearing pulmonary secretions from patients with artificial airways, but it carries risks such as bleeding and infection. To support nursing training programs by evaluating performance during ES, we organized the Activity Recognition of Nurse Training Activity using Skeleton and Video Dataset with Generative AI challenge as part of the 6th International Conference on Activity and Behavior Computing. Participants were tasked with recognizing 9 activities in ES from skeleton data, with a requirement to use Generative AI creatively. The dataset included recordings of ten experienced nurses with over three years of clinical suctioning experience and twelve university nursing students performing ES. The challenge, which ran from January 17th to March 23rd, 2024, was assessed on the average F1 score across all subjects and the quality of the submitted papers. Team Seahawk achieved the highest F1 score of 57% by leveraging ChatGPT for feature suggestion, LightGBM for classification, and Optuna for hyperparameter optimization, significantly surpassing the baseline score of 46%.
    Download PDF (464K)
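    The winning pipeline described in the entry above combines ChatGPT-suggested features, a LightGBM classifier, and Optuna hyperparameter search, scored on average F1. The sketch below illustrates what such a tuning loop can look like; it is not the authors' code, and the feature matrix X, labels y, and searched parameter ranges are placeholder assumptions.

    # Minimal LightGBM + Optuna sketch (illustrative only). X and y stand in
    # for skeleton-derived features and the 9 ES activity labels.
    import lightgbm as lgb
    import numpy as np
    import optuna
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    X = np.random.rand(500, 32)        # placeholder feature matrix
    y = np.random.randint(0, 9, 500)   # placeholder labels for 9 ES activities
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, stratify=y, random_state=0)

    def objective(trial):
        model = lgb.LGBMClassifier(
            n_estimators=300,
            learning_rate=trial.suggest_float("learning_rate", 0.01, 0.3, log=True),
            num_leaves=trial.suggest_int("num_leaves", 15, 255),
            min_child_samples=trial.suggest_int("min_child_samples", 5, 100),
        )
        model.fit(X_tr, y_tr)
        # Macro F1 mirrors the challenge's average-F1 evaluation criterion.
        return f1_score(y_va, model.predict(X_va), average="macro")

    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=30)
    print("best macro F1:", study.best_value, "best params:", study.best_params)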
  • Motoki Sakai, Masaki Shuzo
    2024 Volume 2024 Issue 3 Pages 1-19
    Published: 2024
    Released on J-STAGE: August 28, 2024
    JOURNAL OPEN ACCESS
    Endotracheal suctioning is a crucial medical procedure for patients on mechanical ventilation to maintain airflow, but it is invasive, involves risks for the patient, and requires a high level of skill from the nurses who perform it. Proper training and technology support are therefore essential to minimize risks. Research by Ngo et al. focuses on recognizing nurses' suctioning activities to aid skill assessment. The 6th Activity and Behavior Computing (ABC) conference hosted a challenge competition aimed at improving this recognition accuracy using a dataset of keypoints and annotations generated from videos of endotracheal suctioning. In this competition, datasets from 22 subjects were distributed, and participants had to recognize nine activity classes of endotracheal suctioning labeled from 0 to 8. We first examined tendencies in the subjects' activities captured in the videos to address misclassifications among classes 0, 4, and 5, as well as among classes 1, 2, 3, 6, 7, and 8. Specifically, exploiting the fixed camera angle and the subjects' working positions during endotracheal suctioning, we attempted to improve recognition accuracy by incorporating rule-based algorithms into the machine learning pipeline. This identified promising features and rules such as elapsed time, the disparity between left- and right-hand movements, and post-processing that accounts for the execution sequence of endotracheal suctioning. In evaluation, we achieved macro accuracy, precision, recall, and F1-score of approximately 0.859, 0.773, 0.767, and 0.738, respectively. Additionally, we augmented the data for classes 1, 3, 5, and 6 with ChatGPT-4 to improve recognition accuracy for these classes. This improved the recognition accuracy of classes 1, 3, 5, and 6; the final macro accuracy, precision, recall, and F1-score were 0.858, 0.778, 0.793, and 0.749, respectively.
    Download PDF (5416K)
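    The entry above combines a machine learning classifier with rule-based corrections built on elapsed time, left/right hand disparity, and the known execution order of ES. The sketch below shows one generic way such post-processing rules can be applied to per-window predictions; the thresholds, class indices, and rules are invented placeholders, not the authors' actual logic.

    # Illustrative rule-based post-processing over classifier outputs.
    # Thresholds and class indices are placeholder assumptions.
    import numpy as np

    def elapsed_time_rule(pred_labels, timestamps, late_classes=(7,),
                          fallback_class=0, early_cutoff=30.0):
        """Re-label predictions that contradict the typical ES timeline.
        pred_labels: per-window class predictions from the ML model
        timestamps:  elapsed seconds since the start of the session
        """
        corrected = pred_labels.copy()
        for i, (label, t) in enumerate(zip(pred_labels, timestamps)):
            # Placeholder rule: a "late" step predicted very early in the
            # session is re-assigned to an early preparation class.
            if label in late_classes and t < early_cutoff:
                corrected[i] = fallback_class
        return corrected

    def smooth_by_majority(pred_labels, window=5):
        """Majority-vote smoothing so isolated labels do not break the
        expected sequence of ES steps."""
        smoothed = pred_labels.copy()
        half = window // 2
        for i in range(len(pred_labels)):
            lo, hi = max(0, i - half), min(len(pred_labels), i + half + 1)
            vals, counts = np.unique(pred_labels[lo:hi], return_counts=True)
            smoothed[i] = vals[np.argmax(counts)]
        return smoothed

    preds = np.array([7, 7, 0, 1, 1, 2, 4])
    times = np.array([5.0, 6.0, 40.0, 60.0, 61.0, 80.0, 120.0])
    print(smooth_by_majority(elapsed_time_rule(preds, times)))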
  • Pengyang Lin, Haoyang Li, Qiqi Xie, Yangke Zhu, Shiyi Li, Gulust ...
    2024 Volume 2024 Issue 3 Pages 1-19
    Published: 2024
    Released on J-STAGE: August 28, 2024
    JOURNAL OPEN ACCESS
    This work explores the application of deep learning models to nursing activity recognition in order to improve the accuracy of nurse action recognition. It focuses on a specific nursing activity, Endotracheal Suctioning (ES), a medical procedure requiring high precision. To improve the accuracy of daily performance assessments of nursing students performing ES, the research team tested Feedforward Neural Network (FNN) and Deep Residual Network (ResNet) models on this task. The results indicate that the ResNet model, compared to traditional methods and the FNN model, shows superior efficacy in handling complex temporal data and activity recognition tasks without requiring the manual feature design typical of traditional models. Guided by generative AI techniques, the team applied data imputation and segmentation methods to further optimize model performance. Through evaluation and analysis of model performance, this study not only improves nursing activity recognition, reaching 89% accuracy across 9 types of ES activities, but also demonstrates the potential of deep learning for future applications in the nursing field. It also shows how using generative AI in research can lead to new ways of solving real-world scientific problems. Code is available at https://github.com/lpy888999/6th-Nurse-Care-ABC.
    Download PDF (2122K)
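    The entry above compares a plain feedforward network with a residual (ResNet-style) model for skeleton-based ES recognition. Below is a minimal sketch of a residual classifier over per-frame skeleton features; it is a rough analogue, not the authors' architecture, and the input dimension, hidden width, and depth are illustrative assumptions.

    # Minimal ResNet-style classifier over flattened skeleton features.
    # Sizes are placeholder assumptions (e.g. 17 keypoints x 2 coordinates).
    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim, dim), nn.BatchNorm1d(dim), nn.ReLU(),
                nn.Linear(dim, dim), nn.BatchNorm1d(dim),
            )
            self.act = nn.ReLU()

        def forward(self, x):
            return self.act(x + self.net(x))   # skip connection

    class SkeletonResNet(nn.Module):
        def __init__(self, in_dim=34, hidden=128, n_classes=9, n_blocks=3):
            super().__init__()
            self.stem = nn.Linear(in_dim, hidden)
            self.blocks = nn.Sequential(*[ResidualBlock(hidden) for _ in range(n_blocks)])
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):
            return self.head(self.blocks(self.stem(x)))

    model = SkeletonResNet()
    logits = model(torch.randn(8, 34))   # batch of 8 per-frame feature vectors
    print(logits.shape)                  # torch.Size([8, 9])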
  • Md Ibrahim Mamun, Shahera Hossain, Md Baharul Islam, Md Atiqur Rah ...
    2024 Volume 2024 Issue 3 Pages 1-20
    Published: 2024
    Released on J-STAGE: August 28, 2024
    JOURNAL OPEN ACCESS
    Endotracheal suctioning (ES) is a complex procedure associated with a series of actions and inherent risks, particularly in the intensive care unit (ICU). Given the importance of precise execution, it is preferable to have skilled nurses perform ES tasks. To facilitate nurse training and ensure proficiency in ES procedures, automated nursing activity recognition presents a promising solution, offering benefits in terms of cost, time, and effort. In this paper, we propose a novel approach to nurse training activity recognition for ES tasks, leveraging the capabilities of Generative Artificial Intelligence (GenAI). Specifically, we demonstrate how Large Language Models (LLMs), a subset of GenAI, can enhance the efficiency of nursing activity recognition. By employing LLMs such as OpenAI's ChatGPT, Google's Gemini, and Microsoft's Copilot, we aim to improve the accuracy and efficiency of our methodology. Our study identifies a clear gap in the use of LLMs for more accurate determination of ES-related nursing activities with reduced human interaction. By integrating the approaches and data features suggested by the LLMs, we achieve a notable increase in accuracy from a baseline of 0.51 to 0.58, along with an improvement in F1 score from 0.31 to 0.46. These results underscore the potential of LLMs, as a subset of GenAI, to enhance traditional problem-solving efficiency by offering robust solutions and procedures.
    Download PDF (444K)
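    The entry above uses LLM suggestions (ChatGPT, Gemini, Copilot) to choose data features and processing steps for ES recognition. The sketch below computes a few hand-crafted skeleton features of the kind an LLM might propose (wrist speed, inter-wrist distance, elbow angle); the keypoint indices and feature set are hypothetical and not taken from the paper.

    # Example skeleton features; keypoint indices follow a hypothetical layout.
    import numpy as np

    L_WRIST, R_WRIST, R_ELBOW, R_SHOULDER = 9, 10, 8, 6   # assumed indices

    def window_features(kp, fps=30.0):
        """kp: array of shape (frames, keypoints, 2) with x/y coordinates."""
        lw, rw = kp[:, L_WRIST], kp[:, R_WRIST]
        # Mean speed of the right wrist across the window.
        r_speed = np.linalg.norm(np.diff(rw, axis=0), axis=1).mean() * fps
        # Mean distance between the wrists (two-handed vs one-handed steps).
        inter_wrist = np.linalg.norm(lw - rw, axis=1).mean()
        # Mean right elbow angle from shoulder-elbow and wrist-elbow vectors.
        v1 = kp[:, R_SHOULDER] - kp[:, R_ELBOW]
        v2 = rw - kp[:, R_ELBOW]
        cos = (v1 * v2).sum(axis=1) / (
            np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1) + 1e-8)
        elbow_angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean()
        return np.array([r_speed, inter_wrist, elbow_angle])

    feats = window_features(np.random.rand(90, 17, 2))   # 3-second window at 30 fps
    print(feats)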
  • Penglin Jiang, Boyang Dai, Bochen Lyu, Zeng Fan, Gulustan Dogan
    2024 Volume 2024 Issue 3 Pages 1-18
    Published: 2024
    Released on J-STAGE: August 28, 2024
    JOURNAL OPEN ACCESS
    This research is based on the 6th ABC Challenge, which focuses on leveraging Human Activity Recognition (HAR) systems to enhance Endotracheal Suctioning (ES) procedures. The challenge's objective is to accurately identify the activities performed by nurses from the provided dataset. The dataset, comprising skeleton data and video recordings of healthcare professionals performing ES procedures, is collected and preprocessed. Informative features capturing joint angles, velocities, and spatial relationships are extracted. These features are then used as inputs to three prediction models: GBDT, XGBoost, and LightGBM. Our experimental results demonstrate that LightGBM outperforms the other models with the highest accuracy of 0.819, followed by XGBoost (0.807) and GBDT (0.763) on the Nurse Care Activity Recognition Challenge benchmark dataset. These findings contribute to advancing nurse activity recognition and have implications for improving healthcare monitoring and workflow management. Given the strong performance of LightGBM, we submitted our challenge results using this algorithm. The code is available at https://github.com/mobaaa12/Endotracheal-Suctioning-Procedure-Recognition.
    Download PDF (1537K)
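    The entry above extracts joint-angle, velocity, and spatial-relationship features and compares GBDT, XGBoost, and LightGBM. The sketch below shows a straightforward way to run such a comparison with cross-validation; the data, splits, and default parameters are placeholder assumptions and will not reproduce the reported scores.

    # Illustrative three-model comparison on a placeholder feature matrix.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score
    from xgboost import XGBClassifier
    from lightgbm import LGBMClassifier

    X = np.random.rand(600, 48)        # stand-in for joint angles, velocities, distances
    y = np.random.randint(0, 9, 600)   # 9 ES activity classes

    models = {
        "GBDT": GradientBoostingClassifier(),
        "XGBoost": XGBClassifier(),
        "LightGBM": LGBMClassifier(),
    }
    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=3, scoring="accuracy").mean()
        print(f"{name}: mean CV accuracy = {acc:.3f}")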
  • Samiul Islam, S. M. Hozaifa Hossain, Md. Zasim Uddin, Shahera Hoss ...
    2024 Volume 2024 Issue 3 Pages 1-15
    Published: 2024
    Released on J-STAGE: September 05, 2024
    JOURNAL OPEN ACCESS
    Endotracheal suctioning is a crucial medical procedure that requires skilled professional nurses, yet there is little research on automatically recognizing nurse activities during it. This paper presents a method to identify nurses' actions during endotracheal suctioning by analyzing skeleton data from image sequences with traditional machine learning (ML) methods. We first preprocess the skeleton data and extract features, then employ ML methods for classification. We also explore Generative AI with LLMs for feature generation and selection to improve accuracy. We evaluate the proposed framework using accuracy and F1-score on the Activity Recognition of Nurse Training Activity dataset of the 6th ABC Challenge and find that XGBoost performs best, with both accuracy and F1-score of 97%. We hope this research contributes to automated nursing activity recognition, potentially benefiting patient care and safety during endotracheal suctioning procedures.
    Download PDF (505K)
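    The entry above preprocesses the skeleton data, extracts features, and classifies with traditional ML, with XGBoost performing best. The sketch below illustrates a generic preprocessing stage only: interpolating dropped keypoint detections and segmenting the sequence into fixed-length windows. The window length, stride, and NaN handling are assumptions, not the authors' settings.

    # Illustrative preprocessing: interpolate missing keypoints, then window.
    import numpy as np
    import pandas as pd

    def interpolate_missing(kp_flat):
        """kp_flat: (frames, n_coords) array where undetected joints are NaN."""
        return pd.DataFrame(kp_flat).interpolate(limit_direction="both").to_numpy()

    def sliding_windows(seq, win=60, stride=30):
        """Yield overlapping fixed-length windows for per-window classification."""
        for start in range(0, len(seq) - win + 1, stride):
            yield seq[start:start + win]

    raw = np.random.rand(300, 34)
    raw[np.random.rand(*raw.shape) < 0.05] = np.nan   # simulate dropped detections
    clean = interpolate_missing(raw)
    windows = list(sliding_windows(clean))
    print(len(windows), windows[0].shape)             # 9 windows of shape (60, 34)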
  • Tianao Lan, Baiyu Huang, Mingyuan Ji, Carson Moore, Hilmi Demirha ...
    2024 Volume 2024 Issue 3 Pages 1-17
    Published: 2024
    Released on J-STAGE: September 06, 2024
    JOURNAL OPEN ACCESS
    In this paper, we address the difficulties of Human Activity Recognition (HAR) during the critical nursing procedure of Endotracheal Suctioning (ES). The article highlights the use of generative AI, specifically ChatGPT, to address data scarcity and imbalance when recognizing actions in the ES procedure. Our approach involves training a conditional GAN (CGAN) on the raw data, applying PCA for dimensionality reduction, and employing the Random Forest algorithm as the final classifier. By integrating Computer Vision, Deep Learning, and Machine Learning techniques, we aim to identify nurses' actions during the ES procedure. Our results demonstrate the feasibility of the proposed methods, showing substantial improvements over the baseline accuracy and F1 score. The work also offers insights for developing intelligent teaching systems and medical safety monitoring applications.
    Download PDF (935K)
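    The entry above trains a CGAN to augment scarce classes, reduces dimensionality with PCA, and classifies with a Random Forest. The sketch below illustrates only the PCA + Random Forest stage; the CGAN output is represented by a placeholder array, and the component count and forest size are assumptions rather than the authors' settings.

    # Illustrative PCA + Random Forest stage; synthetic samples are stand-ins
    # for CGAN-generated data used to rebalance rare classes.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.pipeline import make_pipeline

    X_real = np.random.rand(500, 64)
    y_real = np.random.randint(0, 9, 500)
    X_synth = np.random.rand(100, 64)          # placeholder for CGAN samples
    y_synth = np.random.randint(0, 9, 100)

    X = np.vstack([X_real, X_synth])
    y = np.concatenate([y_real, y_synth])

    clf = make_pipeline(PCA(n_components=20),
                        RandomForestClassifier(n_estimators=300, random_state=0))
    clf.fit(X, y)
    print(clf.predict(X_real[:5]))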