In smart buildings, which should provide a safe, comfortable, and convenient environment while saving energy, it is highly effective to extract human behavior and control building facilities accordingly. Many studies measure pedestrian traffic using sensor networks composed of low-power, inexpensive pedestrian-detecting sensors. We propose a novel method to detect the number and direction of pedestrians walking through a one-person gate with a pyroelectric infrared sensor. A dual-element sensor with a Fresnel lens, which forms a pair of detection areas, is attached to the ceiling. Our algorithm detects the number and direction based on the patterns and amplitude ratios of successive sensor signal peaks for various passing movements, including not only walking but also running, standing briefly, walking in single file at close range, and passing each other, under various temperature conditions. Through experiments, we confirmed that our method detected pedestrians 100% correctly for walking, jogging, and walking in single file with a spacing of 30 cm or more. Furthermore, in experiments in actual environments, namely classrooms and a poster session venue, it detected pedestrians 100% correctly for walking and 90% correctly on average for various passing movements.
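The peak-pattern idea can be illustrated with a minimal sketch: a dual-element pyroelectric sensor outputs a differential signal whose first two significant peaks have opposite signs, and the sign of the first peak indicates which detection area was crossed first, hence the walking direction. The function, threshold, and direction labels below are illustrative assumptions, not the authors' implementation.

```python
def detect_direction(signal, threshold=0.5):
    """Classify passing direction from the sign order of the first two
    significant, alternating-sign peaks of a dual-element pyroelectric
    signal. Illustrative sketch; threshold and labels are assumptions."""
    peaks = []
    for i in range(1, len(signal) - 1):
        v = signal[i]
        # Local extremum of sufficient amplitude, alternating in sign.
        if abs(v) >= threshold and abs(v) >= abs(signal[i - 1]) and abs(v) > abs(signal[i + 1]):
            if not peaks or (v > 0) != (peaks[-1] > 0):
                peaks.append(v)
    if len(peaks) < 2:
        return None  # no clear passing event
    return "in" if peaks[0] > 0 else "out"
```

A positive-then-negative peak pair is read as one direction, the reverse pair as the other; counting such pairs would give the number of passers.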
Many applications based on activity recognition with wearable sensors are currently being developed. However, present activity recognition techniques require large amounts of labeled data for learning. Moreover, the set of output labels is restricted by the training data and sometimes mismatches the application's purpose. This study therefore proposes a method that solves these problems by integrating annotation and analysis tools for human activities, employing active learning techniques and a hierarchical label definition. The active learning techniques provide efficient and continual label collection. The hierarchical label definition and its dynamic modification provide flexibility in utilizing recognition results. The labeling effort increased by introducing hierarchical labels is mitigated by propagating changed labels to higher and lower layers. ATLAYA (Annotation and analysis Tool with LAYered activity and Active learning) is implemented as a prototype of the proposed method. An evaluation with ATLAYA showed that the proposed method can decrease labeling effort and that the hierarchical label definition is effective.
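As a hint of how active learning reduces labeling effort, a common query strategy is uncertainty sampling: ask the annotator to label only the samples the current classifier is least confident about. The sketch below is a generic illustration of that strategy, not the actual selection rule used in ATLAYA.

```python
def select_for_labeling(probs, k=2):
    """Uncertainty sampling: return the indices of the k samples whose
    top-class probability is lowest (i.e., most uncertain).
    `probs` is a list of per-class probability lists, one per sample."""
    confidences = [max(p) for p in probs]
    order = sorted(range(len(probs)), key=lambda i: confidences[i])
    return order[:k]
```

Only the selected samples are sent to the annotator, so labels accumulate where they help the classifier most.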
Predicting pointing gestures can be an effective way to increase fluency and naturalness in human-robot interaction. This paper therefore proposes a method to predict a human pointing gesture. The method predicts the final hand position based on one of the mathematical models of human hand motion, the minimum-jerk model. Analytically, the final hand position and the finishing time of the pointing gesture can be predicted by detecting the first peak of hand acceleration, which occurs within the first 21% of the entire movement. We implemented and evaluated the method using a Microsoft Kinect and a desktop-size robot named Robovie-W. The results showed that the estimation error was about 18 cm in CEP (Circular Error Probability), and implied that the prediction could improve the feeling of naturalness as well as the impression of the robot's motion.
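The 21% figure follows from the minimum-jerk model itself: the trajectory x(t) = x0 + (xf - x0)(10s^3 - 15s^4 + 6s^5) with s = t/T has its first acceleration extremum at s = 1/2 - sqrt(3)/6 ≈ 0.211, so detecting that peak at time t_p lets one estimate the total duration as T ≈ t_p / 0.211 (and, with the model, the final position). The code below is a numerical check of this property, not the paper's implementation.

```python
import math

def min_jerk_pos(t, T, x0, xf):
    """Minimum-jerk position profile: x0 + (xf-x0)(10s^3 - 15s^4 + 6s^5)."""
    s = t / T
    return x0 + (xf - x0) * (10*s**3 - 15*s**4 + 6*s**5)

def min_jerk_acc(t, T, x0, xf):
    """Second time derivative of the minimum-jerk profile."""
    s = t / T
    return (xf - x0) / T**2 * (60*s - 180*s**2 + 120*s**3)

# First acceleration peak: solve d/ds (60s - 180s^2 + 120s^3) = 0.
S_PEAK = 0.5 - math.sqrt(3) / 6   # ~0.2113, i.e. ~21% of the movement

def predict_duration(t_peak):
    """Estimate total movement time from the first acceleration peak time."""
    return t_peak / S_PEAK
```

The model also satisfies x(T) = xf exactly, so once T is estimated the final hand position follows from the same profile.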
If we can detect groups of visitors in public spaces and commercial facilities, we can provide information tailored to the attributes of each group, and we can also provide facility owners with statistics on facility usage. Features such as person-to-person distance and gaze direction are useful for group detection and have been used in a number of works. However, almost all of these works extract the features from the whole data, which causes misdetection in some cases. Even when we are walking with a friend or colleague, we do not interact with them all of the time. This means that the information meaningful for group detection is embedded in part of the time-series data, not in all of it. We have to pick out the meaningful portions and ignore the rest. To this end, we divide the whole time series into segments along the time axis and apply multiple instance learning (MIL) to find the meaningful information among them. The features computed for each time slot are treated as instances in MIL, which can extract one or more positive instances from all of the instances, both positive and negative. We conducted experiments using two types of data: simulated group actions and actual actions. Our method outperforms the existing method on both datasets.
We propose a method that learns a probabilistic cellular automaton from people's trajectories and applies it to people tracking in videos. For learning the probabilistic cellular automaton, we introduce Dirichlet smoothing to compensate for the lack of trajectory data, since it is difficult to collect sufficiently dense data. Furthermore, for tracking, we develop a data assimilation algorithm that sequentially updates the probabilistic cellular automaton using the sequence of images. We demonstrate that the proposed probabilistic cellular automaton provides better tracking performance than existing models.
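Dirichlet smoothing of the automaton's transition probabilities amounts to additive smoothing of the observed cell-to-cell transition counts: p_ij = (n_ij + α) / (n_i + αK), where K is the number of target cells, so unseen transitions keep nonzero probability even from sparse trajectory data. A minimal sketch (the α value and count layout are assumptions):

```python
def smoothed_transitions(counts, alpha=1.0):
    """Dirichlet (additive) smoothing of transition counts from one cell
    to each of its K possible successor cells:
        p_j = (n_j + alpha) / (sum_j n_j + alpha * K)."""
    K = len(counts)
    total = sum(counts)
    return [(c + alpha) / (total + alpha * K) for c in counts]
```

With counts [3, 1, 0] and alpha = 1, the unseen third transition still receives probability 1/7 instead of zero.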
It is said that human postural sway in a quiet-standing position exhibits individual differences, and from this viewpoint several contributions have been made to the person identification problem. However, current research on person identification based on postural sway data has two problems: (1) most studies target the postural sway observed after the subject has completely stepped onto a stabilometer, and (2) the effect of carried weight on identification accuracy is unclear. Therefore, in this study we measure postural sway while stepping onto and off a stabilometer as well as while standing quietly. In addition, we analyze the effect of shouldering a backpack on person identification accuracy. The results of experiments with data from 10 human subjects show that carrying a 2 kg backpack affects identification accuracy, but postural sway data recorded while shouldering a backpack can still be used to identify persons. We also show that some features extracted from the stepping-on and stepping-off intervals are effective for identifying persons.
In online education, the lecturer often finds it hard to grasp the behavior of the students, especially when they are spatially widely distributed and the lecturer cannot see them all at once through a single video feed. To overcome this problem, a method of estimating student behavior from posture measurements is proposed. Pressure sensors and an infrared distance sensor were used to record posture. The recordings showed different distributions of mean and variance as the subject performed different tasks. Based on these observations, features were constructed from the raw recorded values. The features enabled successful classification of the tasks in which the subjects were engaged.
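The feature construction described above, per-window mean and variance of the raw sensor values, can be sketched as follows (the windowing itself is an assumption, as the abstract does not specify it):

```python
from statistics import mean, variance

def posture_features(window):
    """Mean and (sample) variance of one window of raw pressure or
    distance readings; these statistics differed across tasks."""
    return [mean(window), variance(window)]
```

Each window's feature pair would then be fed to a classifier trained on task labels.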
This paper presents a novel bed-leaving sensor that can be installed on various beds. Made of piezoelectric films, this small, thin sensor requires no power supply for operation. Since the sensors are installed on the bed frame, they are invisible, which helps protect privacy. Moreover, we have developed a sensing system for monitoring and recognizing behavior patterns. A load test evaluating the output characteristics of the sensors showed a linear relationship between load and output. Output values vary according to the input speed pattern even under an equivalent load, and as for directional dependency, the maximum difference in output voltage is a factor of three. We conducted an evaluation experiment with ten subjects using our original machine-learning-based method. The cross-validation results reveal that the mean recognition accuracy was 92.1% for four behavior patterns: longitudinal sitting, lateral sitting, terminal sitting, and leaving the bed. The recognition accuracy for leaving the bed is merely 78.3%, whereas the accuracies for lateral sitting, terminal sitting, and longitudinal sitting were 100%, 98.3%, and 91.7%, respectively. Analysis of the confusion matrix shows that the false recognitions are distributed between lateral sitting and terminal sitting.
We propose a privacy-protective, avatar-mediated distant-care system. The system avateers an elderly person, animating the avatar based on their joint angles acquired by a motion capture system. This design allows privacy-protected delivery of information about the elderly person's bodily movements. We implemented the system as a web application with an inexpensive motion-capture device and evaluated its communication performance. As a result, we confirmed that the system could render fluid animation without delay under a network bandwidth of 700 Kbps. We also conducted a user study evaluating user comprehension of the avatar's movements and user perception of the elderly person represented by the avatar. We found that the users could understand broad movements of the avatar, and that they rated the elderly person represented by the avatar driven by the inexpensive motion-capture device as likable as when a high-precision motion-capture device was used.
Wearable devices with accelerometers, gyroscopes, and other sensors are available wherever and whenever users need them. In this study, we construct a head gesture recognition system using a wearable sensor with an accelerometer. We use a glasses-like wearable sensor to provide an easy-to-use system. To recognize users' head gestures such as "nodding" and "shaking", we extract feature vectors by applying principal component analysis (PCA) to the acceleration time-series data and then classify them with a k-nearest neighbor (k-NN) or multi-layer perceptron (MLP) classifier. Moreover, we realize a real-time head gesture recognition system. Through experiments on head gesture recognition with multiple users, we confirm the effectiveness of our system.
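The classification step can be sketched with a plain k-NN voter over PCA-projected feature vectors; the toy vectors, labels, and k below are illustrative assumptions, not the study's actual data or parameters.

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training vectors (Euclidean distance). `train` is a list of
    (vector, label) pairs; PCA projection is assumed to have been
    applied to all vectors beforehand."""
    nearest = sorted(train, key=lambda pair: math.dist(pair[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]
```

At run time, each incoming window of acceleration data would be projected with the stored PCA basis and passed to this classifier, enabling real-time recognition.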