2017, Vol. 32, No. 3, p. A-G82_1-11
Personalization of activity recognition has become a topic of interest as a way to improve recognition performance for diverse users. Recent research shows that deep neural networks improve generalization performance in activity recognition using inertial sensors such as accelerometers and gyroscopes; however, personalizing a deep neural network is challenging because it has thousands or millions of parameters, while personalization must generally be done with only a small amount of labeled data.
This paper proposes a novel way to personalize deep neural networks that prevents overfitting by using unlabeled data. This is done by adding an output-distribution similarity regularizer between the reference model and the personalized model, which is an extension of the distillation technique recently proposed by Hinton. Experiments on the OPPORTUNITY activity recognition dataset, one of the most widely used datasets in the field, demonstrate that the proposed regularization prevents overfitting even when only a few labeled samples are available per target class for each user, and provides better recognition performance than other personalization techniques. We also conduct further experiments, including a setting with no labeled data and combinations of the proposed method with commonly used personalization techniques, to examine whether the proposed method complements existing methods or competes with them. The results suggest that the proposed regularization works well in various settings and is complementary to existing methods.
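The core idea above can be sketched as a loss with two terms: ordinary cross-entropy on the few labeled samples, plus a distillation-style regularizer that keeps the personalized model's output distribution close to the reference model's on unlabeled data. The following is a minimal NumPy sketch under assumptions not stated in the abstract: the function names, the weighting factor `lam`, the temperature `T`, and the use of KL divergence as the similarity measure are all illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T yields a softer distribution,
    # as in Hinton-style distillation.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_regularizer(ref_logits, per_logits, T=2.0):
    # Output-distribution similarity on *unlabeled* samples:
    # mean KL(reference || personalized) over softened outputs.
    # (KL is an assumed choice of similarity measure here.)
    p = softmax(ref_logits, T)
    q = softmax(per_logits, T)
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))

def personalization_loss(labels, per_logits_labeled,
                         ref_logits_unlabeled, per_logits_unlabeled,
                         lam=0.5, T=2.0):
    # Supervised cross-entropy on the few labeled samples per user ...
    q = softmax(per_logits_labeled)
    ce = float(np.mean(-np.log(q[np.arange(len(labels)), labels])))
    # ... plus the similarity regularizer computed on unlabeled data,
    # which discourages the personalized model from drifting far from
    # the reference model and thus from overfitting.
    reg = distillation_regularizer(ref_logits_unlabeled,
                                   per_logits_unlabeled, T)
    return ce + lam * reg
```

When the personalized model's outputs match the reference model's, the regularizer is zero, so the penalty only activates as personalization pulls the output distributions apart; `lam` trades off fitting the labeled samples against staying close to the reference model.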