Abstract
Identifying nursing activity during critical procedures, such as endotracheal suction (ES), is crucial for ensuring patient safety and the quality of treatment. The expansion of home care requires an increase in the number of certified professionals who can perform endotracheal procedures and provide monitoring during these activities. To fulfill these needs, this study aims to develop an algorithm that recognizes ES activities, has the potential to be implemented on edge devices, and performs real-time processing of nurses' pose keypoints, which are extracted from video using YOLOv7 and represented as x and y coordinates. Edge device implementation is crucial in healthcare for ensuring security and privacy and for reducing cost, network congestion, and latency. In this study, we introduce a combination of a reservoir computing (RC)-based recognition model and large language models (LLMs) to identify nursing activities related to endotracheal suction. RC is suitable for edge device implementation because of its low computational cost, and it processes the temporal features necessary for recognizing nursing activity in real time. To enhance the performance of RC, we introduce a reservoir computing model with multiple readouts, called RCMRO, as the recognition model. The proposed model, which uses LLMs to analyze keypoint data and generate synthetic training data to improve the performance of RCMRO, shows promising performance in distinguishing between various nursing activities. This tool provides healthcare professionals with a prospective method to monitor and evaluate nursing activity in real time, achieving an accuracy of 70.5% and an F1 score of 68.1% when evaluated on a test dataset.
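To illustrate the kind of architecture the abstract describes, the sketch below shows a generic reservoir computing classifier with multiple readouts applied to pose-keypoint sequences. It is not the authors' RCMRO implementation: all dimensions, the leaky echo-state update, the ridge-regression readouts, and the bootstrap-and-average combination scheme are illustrative assumptions.

```python
# Minimal sketch (NOT the authors' implementation) of a reservoir computing
# model with multiple readouts, assuming 2-D pose keypoints as input.
# All sizes and the readout-combination scheme are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_RES, N_CLASSES, N_READOUTS = 34, 200, 5, 3  # e.g. 17 keypoints x (x, y)

# Fixed random input and recurrent weights: the reservoir itself is not trained.
W_in = rng.uniform(-0.5, 0.5, (N_RES, N_IN))
W = rng.normal(0, 1, (N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def reservoir_state(seq, leak=0.3):
    """Run one keypoint sequence (T, N_IN) through the reservoir; return final state."""
    x = np.zeros(N_RES)
    for u in seq:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
    return x

def train_readout(states, labels, reg=1e-2):
    """Ridge-regression readout mapping states (M, N_RES) to one-hot labels (M, C)."""
    Y = np.eye(N_CLASSES)[labels]
    return np.linalg.solve(states.T @ states + reg * np.eye(N_RES), states.T @ Y)

# Toy stand-in data: random "keypoint" sequences with random activity labels.
seqs = [rng.normal(size=(30, N_IN)) for _ in range(60)]
labels = rng.integers(0, N_CLASSES, 60)
S = np.stack([reservoir_state(s) for s in seqs])

# Multiple readouts, here trained on bootstrap resamples; scores are averaged.
readouts = []
for _ in range(N_READOUTS):
    idx = rng.integers(0, len(S), len(S))
    readouts.append(train_readout(S[idx], labels[idx]))

scores = np.mean([S @ W_out for W_out in readouts], axis=0)
preds = scores.argmax(axis=1)  # one predicted activity class per sequence
```

Because only the readout weights are trained (a small linear solve), this style of model keeps training and inference cheap, which is consistent with the edge-device motivation stated above.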