IEICE Transactions on Information and Systems
Online ISSN : 1745-1361
Print ISSN : 0916-8532
Ensemble Learning with Heterogeneous Models for Human Activity Recognition in Missing Sensor Scenarios
Ryoichi SEKIGUCHI, Hiroshi MINOWA, Koki YAMASHITA, Masaki KAWAKATSU
Advance online publication

Article ID: 2025HCP0004

Abstract

Contemporary smartphones and wearable devices are equipped with various sensors, and considerable research has focused on human activity recognition (HAR), i.e., inferring user context from sensor data. However, a major challenge in sensor-based HAR is missing sensor data. Depending on the smartphone model, certain sensors may be unavailable, and sensor degradation or failure can also cause partial data loss. In such cases, conventional activity recognition methods suffer a considerable drop in accuracy compared with scenarios where all sensors are available.

Thus, this paper proposes a robust activity recognition method that maintains high performance even when some sensor modalities are missing. Specifically, we introduce an ensemble approach that combines three types of models: deep learning, XGBoost, and LightGBM models. Each model is trained for a specific missing sensor pattern, and during the inference process, the system detects the missing sensor type and routes the input to the corresponding model set. The ensemble learning method employs blending by averaging the predicted probabilities for each activity class from the three models.
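The blending step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the model names, array shapes, and example probabilities are assumptions, and only the averaging of per-class predicted probabilities from three models reflects the method stated in the abstract.

```python
import numpy as np

def blend_predictions(prob_dl, prob_xgb, prob_lgbm):
    """Blend per-class probabilities from three heterogeneous models.

    Each argument is an array of shape (n_samples, n_classes) holding
    predicted probabilities from one model (deep learning, XGBoost,
    LightGBM). Blending averages the three, then the activity class
    with the highest blended probability is chosen.
    """
    blended = (prob_dl + prob_xgb + prob_lgbm) / 3.0
    return blended, np.argmax(blended, axis=1)

# Illustrative probabilities: two windows, three activity classes.
p_dl   = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])
p_xgb  = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
p_lgbm = np.array([[0.5, 0.4, 0.1], [0.1, 0.2, 0.7]])

blended, labels = blend_predictions(p_dl, p_xgb, p_lgbm)
print(labels)  # → [0 2]
```

In a missing-sensor deployment, the same blending would be applied within whichever model set was trained for the detected missing-sensor pattern.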

In this setup, the deep learning model uses time-frequency-domain features, whereas the XGBoost and LightGBM models rely on statistical features. By integrating these heterogeneous models, the ensemble compensates for the weaknesses of the individual models, achieving an F1-score above 0.80 even when a single sensor modality is missing and up to 0.84 when all sensors are available. These results demonstrate that the proposed method suppresses the degradation in recognition performance caused by missing sensors while accurately recognizing complex user behaviors that no single model can capture.
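As a rough sketch of the statistical features the gradient-boosted models might consume, the snippet below flattens one windowed sensor signal into per-axis summary statistics. The abstract does not specify the feature set, so the particular statistics here (mean, standard deviation, min, max) are assumptions for illustration only.

```python
import numpy as np

def statistical_features(window):
    """Turn one sensor window of shape (T, n_axes) into a feature vector.

    Concatenates simple per-axis statistics; real HAR pipelines often
    add further statistics (percentiles, energy, correlation, etc.).
    """
    return np.concatenate([
        window.mean(axis=0),  # per-axis mean
        window.std(axis=0),   # per-axis standard deviation
        window.min(axis=0),   # per-axis minimum
        window.max(axis=0),   # per-axis maximum
    ])

# Example: a 2-sample window from a 2-axis sensor.
w = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(statistical_features(w))  # eight values: mean, std, min, max per axis
```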

© 2026 The Institute of Electronics, Information and Communication Engineers